The project aims to solve the problem of finding specific information in aggregated social network content, such as thousands of tweets, where the majority may be duplicate, redundant, or irrelevant. Finding this information quickly requires time and effort that varies with the individual case. Currently, a user must either sort through the content linearly until they find the information they want (if it exists at all) or wait until unique information is collated by, for example, the media.

The project aims to extract and filter unique information on a topic using streaming APIs, building dynamic pages of user-driven, crowd-sourced information that update as new data becomes available in real time. Although the legitimacy of the information being built would mimic that of a public wiki, measures would have to be put in place to reduce the impact of false information, such as prioritising information from more highly regarded sources and weighting by the number of occurrences of the same information.

It is important to note that, although the problem described above concerns a user wanting to find specific information, the project would also be useful to those browsing at leisure who wish to view fresh information as it is posted.

Some example uses of the project:
- Collating information from conferences such as E3 or Apple events as Twitter users tweet details as they are released.
- Finding specific details surrounding the passing of Whitney Houston in the minutes after the announcement (as many tweets contain condolences, it may be difficult to find details such as the hotel she stayed at).
- Aiding the generation of incident maps of widespread crimes such as the London riots.
- Providing what-where-why-who information on trending topics.
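The two anti-spam measures mentioned above (prioritising more-regarded sources and counting occurrences of the same information) could be combined into a single ranking step. The sketch below is a minimal, hypothetical illustration, not part of the proposal itself: tweets are assumed to arrive as (author, text) pairs, near-duplicates are grouped by a crude text normalisation, and each unique item is scored by occurrence count multiplied by an assumed per-source credibility weight (which might in practice come from follower counts or a whitelist of news accounts).

```python
from collections import defaultdict


def normalise(text):
    # Crude normalisation: lowercase, drop punctuation, collapse whitespace,
    # so near-duplicate tweets (retweets, minor rewording) share one key.
    kept = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(kept.split())


def rank_tweets(tweets, source_weight):
    """Group near-duplicate tweets and rank each unique item.

    `tweets` is a list of (author, text) pairs; `source_weight` maps an
    author to a hypothetical credibility weight (default 1.0). The score
    combines how often the same information occurs with how regarded its
    best source is, so one trusted report can outrank many unknown ones.
    """
    groups = defaultdict(list)
    for author, text in tweets:
        groups[normalise(text)].append(author)

    scored = []
    for key, authors in groups.items():
        occurrences = len(authors)
        credibility = max(source_weight.get(a, 1.0) for a in authors)
        scored.append((occurrences * credibility, key))

    # Highest-scoring unique items first.
    return [key for score, key in sorted(scored, reverse=True)]
```

In a streaming setting the same grouping and scoring would be applied incrementally as each new tweet arrives, rather than over a complete list.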