So, you think you have an AI strategy? Think again.
fully autonomous systems to emerge; when and how this evolution may occur is unknown. Given this uncertainty, there is growing momentum toward better understanding the social impact of AI. We therefore seek to help organizations effectively and affordably characterize the plausible systemic risks and benefits of AI before systems are deployed into a community, in order to mitigate risks such as growing fear of AVs,4 and to identify novel opportunities such as features that balance privacy and security. SIFA can be performed using publicly available data, independently of whether an organization has the resources to sponsor or participate in university research, standards bodies, or consortia.
OPPORTUNITY SPACE
There is a growing spectrum of initiatives aimed at characterizing the impact of AI from different perspectives. We briefly survey three different types of organizations:
- Berkeley Center for Human-Compatible AI - a university research center
- Partnership on AI - an independent consortium
- IEEE Ethics on Autonomous Systems - a standards association
Berkeley Center for Human-Compatible AI
The Berkeley Center for Human-Compatible AI is a university research center founded in 2016. It brings together researchers from UC Berkeley and partner universities.
Table 1: Berkeley Center for Human-Compatible AI
4 https://www.smartcitiesdive.com/news/aaa-poll-autonomous-vehicles-fear/550766/
June 2019