Modern computational methods provide powerful new tools for conducting research and assisting with important decisions, but they must be applied judiciously so that we do not draw the wrong conclusions. For example, classifiers may be inadvertently trained on biased data, and the use of difficult-to-interpret machine learning methods may make it hard to determine if and when we should be confident in the results.
Such issues are becoming increasingly relevant to research in the humanities and social sciences, as well as to society more generally. In the humanities, the relationship between nontechnical expertise, technical expertise, and “machine expertise” is particularly contentious.
Under what conditions should we trust an algorithm’s results in a decision-making context, and when should we be more sceptical? What roles do (or should) nontechnical, technical, and machine expertise play in the decision-making processes of researchers in the social sciences and humanities, and how can they be applied so as not to lead toward unsupportable conclusions? More broadly, what are the opportunities and challenges posed by machine-assisted and machine-influenced decision making today, and what further opportunities and challenges may lie in store in the future?
Attendees should expect to come away from the workshop with a stronger understanding of different perspectives on the benefits and limitations of applying machine learning techniques to help answer questions of interest to researchers in the social sciences and humanities, and to society more broadly. The workshop will also cover practical techniques for making the most of computational methods for text analysis (e.g. neural networks, topic models, and vector space models) while limiting their potential downsides.
Booking is essential. Please click here or use the online registration link on this page.