Abstracts
Kate Soper
Given the multiple crises we now face, a major shift in thinking about the nature of human prosperity and the qualities of the ‘good’ life is much needed. Though presented as the model to be pursued by less developed communities, affluent consumer culture continues to enrich a global elite at the expense of the health of the planet and the well-being of large numbers of its inhabitants. Consumer culture has improved living standards in certain respects, and exerts a powerful influence. But it has also proved seriously detrimental to well-being, and many of those who are questioning its less beneficial effects, and uneasy about its negative social and environmental impact, are also now seeking better cultural and political representation of their viewpoint. In pressing the case for a compelling – and more mainstream – challenge to prevailing economic orthodoxy and its discourse on human value, I shall provide a digest of my argument on ‘alternative hedonism’; briefly comment on some objections to which it is open; and reflect on its potential political impact – including in helping to offset the defiant inaction on climate change of the populist right.
Josée Johnston
Title: Pleasure, Paradox, and the Cultural Politics of Happy Meat Consumption
Even as concerns about climate change, animal welfare, and health mount, meat remains one of the most pleasurable and culturally significant foods in Europe and beyond. Drawing on recent research from my co-authored book Happy Meat, this talk examines how consumers and producers navigate the “meat paradox”—the tension between the enjoyment of eating meat and the discomfort of its environmental and ethical costs. Using Canadian survey data, focus groups, and interviews with small-scale farmers, I explore how “happy meat” narratives—framing meat as high-welfare, sustainable, and local—offer a partial resolution to this tension. While such narratives can reduce individual guilt and inspire shifts toward “less meat, better meat,” they also risk obscuring the scale of transformation needed to meet climate and sustainability targets. Connecting these insights to the European context, where policies promoting sustainable consumption are evolving, I reflect on how cultural understandings of pleasure and responsibility shape the politics of dietary change. My goal is to provoke discussion about how we might reconcile the pleasures of eating with the urgent demands of planetary stewardship.
Verena Fuchsberger
Human-Computer Interaction (HCI) has long focused on human-centered design, asking how technology can best support people in their everyday and working lives, with the aim of understanding and giving shape to desirable digital-physical realities. However, the world we live in today is shaped by challenges that extend far beyond human concerns. Issues such as climate change, poverty, or social marginalization remind us that humans are deeply entangled with wider ecological and technological systems. As a result, research and design in HCI are beginning to shift their focus beyond the human. This shift is often described through terms such as “more-than-human,” “posthuman,” or “transhuman.” These perspectives encourage us to think not only about humans, but also about the needs, values, and roles of nonhuman actors, whether plants, animals, algorithms, or infrastructures. The question is no longer just how technology can serve humans, but how humans and nonhumans are entangled. Drawing on examples from HCI, I will reflect on what these philosophical positions mean for design and how they might guide us toward more sustainable, just, and livable futures.
Shengnan Han
When we speak of a “more-than-human” world, we imagine more than faster machines or smarter algorithms. We are entering a new condition of humanity, where artificial intelligence, biotechnology, and planetary systems shape who we are and who we might become. From a transhumanist perspective, this is not about replacing the human, but expanding it—our capacities, our horizons, and our responsibilities. This expansion forces us to ask: what human values will guide us? Technology does not simply serve human ends; it mediates and reshapes them. The task, then, is not only to align machines with human values, but to cultivate a relational ethic in which humans, posthumans, and nonhumans co-create meaning. Across cultures we can discern enduring goods—safety, dignity, fairness, care, creativity. These must anchor how we design and live with technology. In Europe, the aspiration has been framed as uniting new technologies with age-old values; globally, it is about resilience, justice, and sustainability. A transhumanist horizon is neither blind faith in machines nor nostalgia for the past. It is a conscious project of co-creation: learning to be better persons, so that together we may shape better worlds.
Deividas Petrulevičius
What does it mean to speak about human values at a time when algorithms are used in healthcare, automation is changing workplaces, and digital platforms shape how we connect, learn, and share information? Do we direct technologies with our values, or do they begin to reshape how we think about what is human? And if values are shifting, who has the authority to decide which ones continue and which ones are left behind? This presentation will open the discussion in the workshop Human Values in a More-than-Human World by looking at the intersections of human–technology interaction in industry, digital innovation, and beyond. Europe’s push in artificial intelligence, robotics, and data governance shows both the promise and the uncertainty of technological progress. Collaboration between Social Sciences and Humanities (SSH) and STEM is not an optional extra — it is central to making these changes meaningful and responsible. Values are not secondary; they steer technological progress and influence whose futures are supported and whose are not. The talk will raise questions that are not easy to answer: Can we assign responsibility to AI systems? What happens when technological goals clash with social priorities? Do we design systems that adapt to human needs, or will people adapt to the systems instead? The purpose is to open space for reflection on how Europe can create innovation that is not only advanced but also inclusive, responsible, and guided by values.
Charlotta Sparre
Since the dawn of time, scientific discovery and technological innovation have told an inextricable story of progress and regress. Current innovations – especially AI – present new opportunities, but also unprecedented risks. Too frequently, history has shown that innovation disconnected from humanism is a precursor of political decay and moral decadence. In times of rising populism and a weakened rule of law, it is on these lessons of history that the EU must anchor research in the humanities, directing it to serve human progress. In other words, scientific progress can enrich civilization only if it is inspired by human values. Cultures evolve, but human values remain anchored to a few essential principles of a moral nature: that human beings are born free and equal, and deserve the opportunity to pursue a life of their own choosing while respecting the choices of others – a life free from want, in which we discover our talents, cultivate our skills and, in turn, contribute to societal progress. Democratic institutions play an essential role in protecting humanism and the values that sustain it. Imperfect as they are, they remain our best option for living in harmony with each other and with nature. The EU offers a framework connecting technology, research, innovation, private-sector capital and drive, academic knowledge, accountability and, above all, respect for the human rights that remain the basis of our common humanity.
Eric Arnould
Green growth and techno-optimism are utopian delusions generated from within the existing dominant capitalist value paradigm. An alternative, well-being paradigm requires epochal changes in consumer values. We should start by recalling our ancestors’ ecosystemic value systems embracing the non-human world. We then need to consider what mechanisms of resource circulation and value cocreation can get us to the 8 Rs envisioned by Serge Latouche (re-evaluate, re-conceptualize, re-structure, re-distribute, re-localize, reduce, re-use, and re-cycle) and thereby foster practices of care, justice, and well-being. I propose some foundational processes of resource circulation and value cocreation. Gift systems, management of commons, reciprocal exchange, and symbiosis are processes well documented cross-culturally, historically, and biosemiotically.
Anne Gerdes
When discussing the possibility of conscious AI, let’s not forget that human consciousness arises from embodied experience within a social world where outcomes matter to us. Machine learning systems – from simple regression models to advanced deep neural networks, including large language models fine-tuned through reinforcement learning with human feedback (RLHF) – process information through computation alone. One of the arguments supporting the idea that consciousness is computationally tractable claims that mental states arise from physical processes in the brain, which can be replicated computationally. Yet, human consciousness and moral agency emerge not merely from neural computation, but from our embodiment, our engagement with and dependence on others, and our lived experiences. We don't just process information about friendship, love, hate, or injustice – we feel them in ways that shape who we are and how we act. AI systems may convincingly simulate aspects of consciousness and moral reasoning. Still, they cannot experience what it means to be in a situation in which something is at stake. This does not rule out the development of AI systems with functional morality – rule-based or machine-trained behaviors that align with human values. However, statistical pattern matching, risk stratification, or next-token prediction in large language models remains fundamentally different from human moral judgment. The notion that AI might become conscious, or moral, confuses the simulation of these capacities with their embodied reality in a social world.
Sergio Salvatore
Human values guide social action not merely as explicit moral prescriptions, but performatively—through their immanence in everyday practices. Values are effective insofar as they are instituted: embodied in action, enacted unreflectively as states of fact that give meaning and direction to collective life. Today, however, this performative ground of values is weakening. Across all levels of public life, the social bond is increasingly structured around the “nemicalization” of alterity—visible in war narratives, migration policies, and polarized political discourse. The affective nature of this dynamic prevents it from being confined to relations with external outgroups: it tends to permeate internal social relations, generating divisions within communities themselves. As a result, the affective polarization that once defined external boundaries now erodes the inner cohesion upon which substantive democracy depends. Defending democracy, therefore, requires fostering forms of social action that inherently convey universalistic values. This, however, should not be seen as an alternative to, but as integrated with, the care for communitarian identity and emotional needs. This calls for a generative welfare model, the empowerment of intermediate processes between the private and public spheres, the promotion of active citizenship, and the recovery of institutions’ capacity to govern systemic transformations.
Mario Scharfbillig
Seen through the lens of the behavioural sciences, values are not only rights but psychological constructs, shaped by foundational motivations, culture, emotions, and social identities. From this perspective, universal principles like democracy or human rights will be understood and interpreted differently — not because of legal debates, but because of how individuals and groups perceive, prioritize, and defend what matters to them. Theories of values and morals show that people anchor their judgments in distinct value priorities, and group identity dynamics amplify these differences. Those dynamics can then turn values into markers of competing identities — nationally and internationally — rather than shared aspirations.
Combined with technology that is designed to drive engagement by showing people what they want to see, reality is increasingly fractured and polarised. The same value — freedom, equality, justice — becomes divisive when framed through competing narratives, even when support remains high in principle. The result is a clash not only of ideas, but of deeply held psychological needs and motivations: security, belonging, status, morality.
To navigate these conflicts, we must better understand the underlying drivers of pluralist perspectives, find better ways to navigate differences, and engage in ways that go beyond social media battles. New forms of online and offline deliberation are needed. Importantly, for SSH to play a role in this, research needs to move beyond describing these issues towards designing and testing tools, methods and approaches that can deal with this foundational plurality.
Mona Kanwal Sheikh
This presentation introduces worldview analysis as a framework for understanding international conflict through the lens of human values, historical experience, and collective self-understanding. It argues that many of today’s geopolitical tensions are not merely about power or resources, but about competing visions of order, justice, and moral legitimacy. By unpacking the worldviews that shape both Western and non-Western actors, the approach helps explain why diplomacy often fails and why certain regimes or conflicts endure despite global pressure. In a world where competing moral narratives and historical grievances increasingly define international relations, worldview analysis offers a framework for rethinking human values as both a source of division and a potential bridge for understanding. The presentation argues that in today’s post-Western order, genuine conflict resolution requires not moral superiority but interpretive empathy: to understand is not to defend, but to enable peace.
Wendy Chun
How and why do human values matter to AI? What role do the social sciences and humanities play in understanding and shaping predictive and generative models?
These questions are usually answered through the rubric of “ethics and AI,” which presumes that ethics is something we add to AI to make it better. This also presumes that STEM and SSAH are fundamentally different. In contrast, this talk will reveal the importance of SSAH for understanding how and why large language models work as a posthuman writing technology. Drawing on the similarities between models of language in Natural Language Processing and in posthumanist theory, it will outline how SSAH and STEM might work together more deliberately to better understand our current models and to build more creative ones moving forward.
Maria Pilar Aguar Fernández
Since the establishment of Horizon 2020 in 2014, the European Commission has systematically integrated SSH into the Framework Programme for research and innovation. The objective was to enhance societal impact, foster a human-centric approach to science and ensure science benefits European citizens. In July, the European Commission published the first SSH integration monitoring report for Horizon Europe. In collaborative research (Pillar II), EUR 7.2 billion was invested in SSH integration – in societal challenges such as democracy, culture, health, energy, transport, industry and agriculture. The report highlighted an increase in SSH-related topics in these sectors, but in some cases little actual SSH integration. As we enter the final phase of Horizon Europe and prepare the next Framework Programme, it is the right moment to reflect on how SSH integration can further increase the impact of research in addressing our era’s complex challenges. If researchers from STEM and SSH disciplines could find a common language, bridge the gap between them, and collaborate on joint outcomes, this could lead to even deeper SSH integration, inspiring a shift towards transdisciplinary research models. The European Commission is committed to facilitating this by connecting researchers, providing guidance, and identifying potential areas of cooperation. Together, we can ensure that our research advances knowledge while driving meaningful and sustainable societal progress.
Else Skjold
With a unique background at the intersection of organisation and management studies, design research, and cultural studies, Else Skjold has helped develop so-called wardrobe research, which continues to unlock new opportunity spaces for systemic change.
Wardrobe research has been developed from the mid-2000s onwards and is rooted in the idea of investigating the daily routines, practices and aspirations of dress practice. It draws heavily on the artefacts stored in the space of the wardrobe, thereby opening up a deeper understanding of how engagement with the physical and experiential properties of garments links to ideas of self, as well as to overarching cultural narratives and socio-economic systems.
In her talk, Skjold will link her early work on wardrobe research to her current participation in mission-based research on the circular economy of textiles in the Danish national mission TRACE (TRAnsition towards a Circular Economy), as well as in Denmark’s national action plan for textiles (2025-28). She will address how SSH research is a vital key in this work, building the necessary systemic links between behaviour, economic logics and drivers, and the larger cultural narratives that can point towards a better and more resilient future.
Gian Vittorio Caprara
In times in which the personality of autocrats occupies most of political discourse, personalizing politics is an invitation to turn scientific investigation to the personality of citizens, to better understand the mental processes that underlie their choices and to evaluate the forms of government that are most congenial to the human condition. Previous findings have corroborated a conceptual model in which social values are the most proximal determinants of political preferences, and political self-efficacy beliefs of political action. Yet much variance remains unaccounted for, and one may doubt the generalizability of findings that derive mostly from Western countries. This has led to a revision of the early model and to an acknowledgement of the crucial role that moral agency exerts in personality development and in facing the global challenges of our time.
Maria Leptin
In her address, Maria Leptin, President of the European Research Council, argues that Europe’s future depends as much on the insight of the social sciences and humanities as on advances in technology. Drawing on history, philosophy, and real-world examples, she makes a broad defence of curiosity-driven research and the intellectual freedom that sustains it.
The speech begins by challenging the assumption that progress is purely technological. Just as Einstein and Bohr transformed our understanding of nature, thinkers such as Marx, Weber, Schumpeter and Keynes reshaped how societies organise work, power, and exchange. Concepts like the rule of law, trial by jury, currency, and insurance are presented as “social technologies” — inventions that allow cooperation and trust. Science, Leptin argues, can describe how the world works, but only the humanities can ask what kind of world we want.
She warns that treating the social sciences and humanities as supporting actors for pre-defined policy goals risks diminishing their real value. Genuine interdisciplinarity, she suggests, means letting these disciplines set questions as well as answer them. Examples from de Beauvoir, Carson, and Arendt illustrate how major social transformations began not from compliance with policy priorities but from independent inquiry that challenged norms and power.
The second half of the speech turns to the future of European research. Leptin calls for research frameworks that preserve the autonomy of scholars while recognising the deep connections between knowledge and human values. The ERC model, she notes, shows that trusting researchers to pursue their own ideas produces both excellence and long-term societal benefit.
Looking ahead to the post-Horizon Europe era, she urges policymakers to resist narrow definitions of “impact” and to support a balanced ecology of knowledge — one that values reflection as much as invention. The social sciences and humanities, she concludes, are Europe’s means of self-understanding: they keep societies conscious of their values, choices, and responsibilities.
The speech closes with a reaffirmation of scientific freedom as a public responsibility — the condition that allows knowledge to remain honest, critical, and humane.
Ewa Luger
Navigating Epistemic Threats in a GenAI Landscape
Democracy is foundational to the European Union – not simply as a preference or tradition, but as a legal, structural, and identity-defining principle. The EU exists because of democracy, operates through democracy, and expands only to states that meet democratic standards. However, this principle is under threat. We are seeing the rise of a set of methods, tools and systems whose collective impact has begun to erode the integrity of our democracies in unprecedented ways: Generative AI (GenAI). Increasingly embedded in the online platforms we have come to rely on, such systems have openly sidestepped contemporary moral values, allowing misinformation and disinformation to spread at scale, and are threatening the epistemic ecosystem that enables informed democratic participation. This talk argues that the impact of GenAI on epistemic values has given rise to a new wave of insecurity that threatens our current conceptions of democracy. It explores some of the current epistemic challenges posed by the rise of GenAI and considers how we might collectively navigate an AI-mediated future.
Mihalis Kritikos
The talk will examine the need for a dedicated framework to assess ethics and value integration in artificial intelligence, highlighting why such a structure is essential for developing human-centered and trustworthy AI systems. It will outline the initial steps taken toward creating specialised guidelines and will situate these efforts within the broader policy initiatives of the European Union in this domain.




