BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Sabre//Sabre VObject 4.5.8//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Zurich
X-LIC-LOCATION:Europe/Zurich
TZURL:http://tzurl.org/zoneinfo/Europe/Zurich
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19810329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:news2009@dmi.unibas.ch
DTSTAMP:20260323T150841Z
DTSTART;TZID=Europe/Zurich:20260324T161500
SUMMARY:Rethinking Fairness for LLMs and Agentic AI
DESCRIPTION:Abstract: Artificial intelligence systems increasingly shape ev
 eryday life\, influencing decisions in domains such as hiring\, education\
 , healthcare\, and content generation. A growing body of evidence shows th
 at these systems do not perform equally across populations: they often ach
 ieve lower predictive accuracy for sensitive groups\, amplify historical b
 iases present in data\, and\, in the case of generative models\, produce s
 tereotypical or skewed representations. In this talk\, I will briefly cov
 er how fairness evolved from predictive ML to generative AI and then move 
 to a new class of fairness problems that emerge with large language models
  (LLMs) and increasingly agentic AI systems. These new settings rely on co
 mmunication and dialogue in natural language\, and sequential decisions th
 at shape user experience and downstream outcomes. I will illustrate how th
 is perspective opens new directions for evaluating and regulating fairness
  in AI\, by extending existing notions of distributional and procedural fa
 irness with the concept of interactional fairness.\n\nDr. Ruta Binkyte [ht
 tps://www.rutabinkyte.com/]
X-ALT-DESC;FMTTYPE=text/html:<p>Abstract: Artificial intelligence systems i
 ncreasingly shape everyday life\, influencing decisions in domains such as
  hiring\, education\, healthcare\, and content generation. A growing body
  of evidence shows that these systems do not perform equally across popula
 tions: they often a
 chieve lower predictive accuracy for sensitive groups\, amplify historical
  biases present in data\, and\, in the case of generative models\, produce
  stereotypical or skewed representations.&nbsp\;<br />In this talk\, I wil
 l briefly cover how fairness evolved from predictive ML to generative AI a
 nd then move to a new class of fairness problems that emerge with large la
 nguage models (LLMs) and increasingly agentic AI systems. These new settin
 gs rely on communication and dialogue in natural language\, and sequential
  decisions that shape user experience and downstream outcomes. I will illu
 strate how this perspective opens new directions for evaluating and regula
 ting fairness in AI\, by extending existing notions of distributional and 
 procedural fairness with the concept of interactional fairness.<br /><br /
 ><a href="https://www.rutabinkyte.com/">Dr. Ruta Binkyte</a></p>
END:VEVENT
END:VCALENDAR
