BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Sabre//Sabre VObject 4.5.8//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Zurich
X-LIC-LOCATION:Europe/Zurich
TZURL:http://tzurl.org/zoneinfo/Europe/Zurich
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19810329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:news1929@dmi.unibas.ch
DTSTAMP:20260206T164818Z
DTSTART;TZID=Europe/Zurich:20260127T161500
SUMMARY:The Practice-Research Gap in AI Threat Modeling
DESCRIPTION:Abstract: Cybersecurity ensures the trustworthy and reliable f
 unctioning of digital systems. Currently\, companies spend about 10% of 
 their IT budget on cybersecurity. Thus\, security and threat modelling 
 are becoming increasingly relevant for artificial intelligence technolo
 gies as well. However\, existing AI threat models have faced criticism 
 regarding their practicality. Common issues include\, but are not limit
 ed to\, unrealistic assumptions\, a focus on isolated models rather tha
 n full AI pipelines\, and perturbation techniques that lack real-world 
 applicability.\nTo address these gaps\, one approach is to measure how 
 AI systems are deployed in practice and assess their exposure to known
  attack vectors. An orthogonal strategy involves collecting empirical 
 data on real-world AI incidents through systematic reporting. Lastly\,
  AI applications can be threat modelled from the earliest stages. To t
 his end\, we examine a real-world application of AI\, the electric gri
 d\, and review caveats and implications for AI security.\nDr. Kathrin 
 Grosse [https://research.ibm.com/people/kathrin-grosse]
X-ALT-DESC;FMTTYPE=text/html:<p>Abstract: Cybersecurity ensures the t
 rustworthy and reliable functioning of digital systems. Currently\, c
 ompanies spend about 10% of their IT budget on cybersecurity. Thus\, 
 security and threat modelling are becoming increasingly relevant for 
 artificial intelligence technologies as well. However\, existing AI t
 hreat models have faced criticism regarding their practicality. Commo
 n issues include\, but are not limited to\, unrealistic assumptions\,
  a focus on isolated models rather than full AI pipelines\, and pertu
 rbation techniques that lack real-world applicability.</p>\n<p>To add
 ress these gaps\, one approach is to measure how AI systems are deplo
 yed in practice and assess their exposure to known attack vectors. An
  orthogonal strategy involves collecting empirical data on real-world
  AI incidents through systematic reporting. Lastly\, AI applications 
 can be threat modelled from the earliest stages. To this end\, we exa
 mine a real-world application of AI\, the electric grid\, and review 
 caveats and implications for AI security.</p>\n<p><a href="https://re
 search.ibm.com/people/kathrin-grosse">Dr. Kathrin Grosse</a></p>
END:VEVENT
END:VCALENDAR
