Prescott sees the great theoretical challenge in the development of a framework he described as "critical data studies": "Big Data needs Big Theory!" The goal must be to bring about the "humanization of Big Data", because data is not reality itself; rather, it is drawn from observation. Prescott cited the Glaswegian archaeologist Jeremy Huggett: "Data is theory-laden, and relationships are constantly changing, depending on context"[5], and then went on to list the seven-point catalogue for Critical Data Studies[6] developed by Craig Dalton and Jim Thatcher, including: data must be located in time and space; it must be understood as inherently political and as serving vested interests; it can never speak for itself; and, consequently, there can be no such thing as "raw data".
In his talk on Trumpf Werkzeugmaschinenbau GmbH, titled "Data-Value Services as a Differentiator for Machine Tools", Stephan Fischer (Ditzingen) delivered some surprisingly deep insights into industry. Formerly a department head at SAP, in 2014 he became the director responsible for IT at Trumpf. The company specializes in laser technology, and his task is to lead it into the new networked digital age. Whereas the initial stages focused on linking the physical with the virtual world and on using sensors to check the quality of the laser needle ("smart data"), the task today is to optimize the entire production system on the basis of mass-produced data and machine learning techniques ("smart factory"); in the future, the focus will shift to developing the Internet of Services into a business model. In the ongoing process of digitalization, some essential questions remain to be resolved: how to transform analogue data into digital form, how data is to be administered, and how data can be transferred safely from the customer to Trumpf, or from Trumpf to research institutions. In the Smart Data Innovation Lab, Trumpf and other private-sector partners are working together with researchers, for instance to predict when machine maintenance falls due. According to Fischer, industry hopes to benefit from the strategic advantages generated by exchanging data with researchers.
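To make the predictive-maintenance idea concrete, here is a minimal sketch in Python of how machine-cycle sensor data could feed an anomaly detector that flags machines for inspection. The features, values, and the use of scikit-learn's IsolationForest are illustrative assumptions, not Trumpf's actual system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)

# Hypothetical per-cycle features: [vibration, temperature, power drift].
# In practice these would come from the machine's sensors, not a simulation.
normal_cycles = rng.normal(loc=[0.2, 70.0, 0.01],
                           scale=[0.05, 2.0, 0.005],
                           size=(500, 3))

# Train on cycles assumed healthy; later outliers become maintenance candidates.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_cycles)

new_cycle = np.array([[0.55, 81.0, 0.04]])  # a suspicious reading
if model.predict(new_cycle)[0] == -1:
    print("Anomalous machine cycle: schedule a maintenance check")
```

The point of such a sketch is only that maintenance is triggered by learned deviations from normal behaviour rather than by fixed service intervals, which is the kind of data-driven service Fischer described.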
In making data available to researchers, the head of the Institute for Employment Research, Stefan Bender (Nuremberg), also sees advantages for the planning of future policy, and thus indirectly for Germany's branding. In his talk "Researcher Access, Economic Value and the Public Good" he called for the development of documentation standards, a definition of data reproducibility, and above all an appropriate way of dealing with errors that occur when using Big Data. Bender also addressed the difference between "made data"/"designed data" and "found data"/"organic data". These forms do not compete with each other, however; rather, they can be combined in a complementary manner: although Big Data may be cheaper to generate, remedying its errors is all the more expensive. Bender also gave the oil metaphor a new reading: like an oil spill, data too is capable of causing catastrophic damage.
According to Dirk Helbing (Zürich), a physicist who holds a chair in sociology, there is currently an imbalance between the knowledge we have gained about nature and what we know about society. He posed the question: "How can we build a smart resilient digital society?"[7] Big Data may help us redress this imbalance. To this end, Helbing imagines a world in which numerous distributed, self-organized systems are coordinated by decentralized control or intelligence in order to reach decisions on the basis of data. In his view, such a "Planetary Nervous System", together with a "Living Earth Simulator" capable of simulating the various changes and influences acting on the world, might yield fundamental insights into our society. At the same time, Helbing pointed out that data also has a "best before" date, as certain datasets lose their value after only a short while. This would most certainly apply to some of the Twitter messages posted during the conference under the hashtag #HKBigData.
A technical aspect of Big Data was elucidated by Shivakumar Vaithyanathan of IBM Big Data Analytics (San José), who began by naming three classes of Big Data problems: 1) questions arising from the sheer volume of data; 2) questions answered by a large number of models covering different aspects; and 3) questions for which only small amounts of data exist but which generate vast amounts via simulation. These challenges are currently addressed by data scientists who extract insights from large amounts of data. To do so, data scientists must know both worlds (the still "normal" IT world and that of Big Data) and be able to translate and mediate between the two. The big goal of Big Data Analytics is therefore to carry out this translation automatically, and thus to bring the data scientist's work automatically into software environments such as Hadoop and its relatives.
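As an illustration of what such automatic translation can look like, the following sketch assumes PySpark, a common declarative layer above Hadoop-style clusters (the dataset and column names are invented): the analyst writes a query at the level of the problem, and the engine compiles it into a distributed execution plan, so no hand-written MapReduce code is needed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("declarative-analytics").getOrCreate()

# Invented sensor readings; in production these would be read from the
# cluster's storage, e.g. spark.read.parquet("hdfs://...").
readings = spark.createDataFrame(
    [("machine-1", 71.2), ("machine-1", 74.8), ("machine-2", 65.3)],
    ["machine_id", "temperature"],
)

# Declarative aggregation: the engine, not the analyst, decides how to
# shuffle and combine the data across the cluster.
summary = (readings.groupBy("machine_id")
                   .agg(F.avg("temperature").alias("avg_temp"),
                        F.count("*").alias("n_readings")))
summary.show()
```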
Several time windows of the Herrenhausen Conference were set aside for 29 junior researchers from 16 different countries, for whom the Foundation provided travel grants. They were given the opportunity to present their research projects from different disciplines in three-minute lightning talks. At the end of the conference, prizes were awarded for their talks and poster presentations on the basis of the participants' votes. Historian Ian Milligan (University of Waterloo) received the prize for the best presentation for his description of the project "Finding Community in the Ruins of GeoCities"; the best poster prize went to social scientist Josh Cowls (Oxford) for "Using Big Data for Valid Research: Three Challenges".
The section of the conference dedicated to legal issues turned out to be a most lively one. Big Data very often comprises data for which researchers have not received (and might never receive) the informed consent of those providing it. This is the point economist Julia Lane (Strasbourg/Melbourne)[8] picked up in her talk "Big Data, Science Policy, and Privacy". One has to be aware that the analysis of Big Data can lead to completely wrong results, a thesis Lane illustrated with the events surrounding the Boston bombing, in which an innocent man committed suicide after being wrongly accused of the attack as a result of Big Data analysis. This gives rise to a legal problem: "What is the legal framework for data on human beings?" The principle of informed consent laid down in the USA in the so-called Common Rule for the protection of human research subjects has become a mere fiction, since in the age of Big Data data can no longer be reliably anonymized. More often than not, individuals have no idea what data is stored about them, or that this data can identify them at any time. How, then, will it be possible to carry out social-scientific research in the future? Lane called for a round-table discussion at which representatives of research institutions, funding organizations, and public authorities agree on a roadmap for tackling this problem.
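The re-identification problem Lane describes is often made concrete with the notion of k-anonymity: the size of the smallest group of records that share the same quasi-identifiers. A minimal sketch in Python (pandas assumed, with an invented toy dataset) shows how a table with all names removed can still single out an individual:

```python
import pandas as pd

# Invented records: names are gone, but quasi-identifiers remain.
records = pd.DataFrame({
    "zip_code":   ["30175", "30175", "30167", "30167", "30175"],
    "birth_year": [1978, 1978, 1985, 1985, 1990],
    "gender":     ["f", "f", "m", "m", "f"],
})

# k-anonymity: the size of the smallest group sharing all quasi-identifiers.
k = records.groupby(["zip_code", "birth_year", "gender"]).size().min()
print(f"k = {k}")  # k == 1: at least one person is uniquely re-identifiable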
In his talk titled "From Alibaba to Abida: Legal Issues concerning Big Data", the legal scholar Thomas Hoeren (Münster) shared the view that it is no longer possible today to obtain people's "informed consent". In the age of Big Data there is hardly any data that is not person-related. He described the German legal rules on the Schufa [credit investigation agency] as the only existing example of sound legislation on Big Data: first, because they prescribe scientific standards for dealing with data and, second, because they provide for transparency, giving citizens the right at any time to request information about the Schufa data stored on them. Hoeren also raised a number of further questions: Who is liable for incorrect data? Does data give rise to property rights, and if so, to whom do they belong? What about personality rights? What role do the two great legal traditions, Anglo-Saxon common law and Roman law, play? Big Data, according to Hoeren, will affect the entire legal framework of our society. Hoeren participates in a project funded by the German Federal Ministry of Education and Research called "Assessing Big Data" (ABIDA), hence the title of his talk. The project's aim is to observe and monitor the multifaceted developments associated with applied Big Data.
Hoeren's skepticism with regard to the current state of affairs in the age of Big Data was shared by his colleague Nikolaus Forgó (Hanover), whose talk bore the provocative title "Ignore the Facts, Forget the Rights: European Principles in an Era of Big Data". Forgó referred to the so-called "Volkszählungsurteil" [census ruling] of December 15, 1983. This landmark ruling of the German Constitutional Court established the basic right to informational self-determination, which follows from the general right of personality and human dignity. The judgment is generally regarded as a milestone of data protection and found its way into the Charter of Fundamental Rights of the European Union, in Article 7 and especially Article 8(2): Personal data "must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified." But what is today's reality? It is characterized by individuals' loss of control over "their" data, and thus by a loss of self: "If the product is free, you are the product". According to Forgó, three problem areas must be addressed simultaneously and clarified at the international level: property rights, the protection of privacy, and copyright.