The Kyndryl Interactive Institute Journal Issue 1 | Page 86

AI has the potential to foster significant positive change across a wide range of sectors. Through my work, I have the privilege of seeing some of the most innovative and impact-focused responsible AI use cases from around the world. However, for AI to deliver on its promise and for it to truly serve our economy and society, we need more funding for these projects, we need to rethink the skills required for the AI workforce, and frankly, we need more women in AI leadership roles.

An investment mindset shift

Responsible AI practices have been linked to improved customer trust, loyalty and other business outcomes. I believe the business case for impact-focused applications of AI is just as compelling. And there’s no shortage of thoughtful, proven applications of AI to address real-world problems. But when you look at which ideas and innovations are getting funded, you have to wonder about the priorities.
We urgently need a mindset shift: venture capital avoids investing in impact-focused AI innovations, mainly because of slower returns, unclear monetization and (perceived!) hard-to-measure outcomes. Top talent also follows higher compensation at commercial ventures, and impact applications face data-access hurdles and complex regulatory environments. For these reasons, among others, fewer than 1% of venture capital goes into impact-driven applications of AI.⁴ And that is problematic.
I’m not suggesting that we must choose between technology for profit and technology for “good”. In fact, I would argue that both can and should go hand in hand. However, as we boldly step into the era of AI, I’m asking: are we prioritizing tech for profit over tech for humanity? Are we investing enough in a foundation for real, meaningful progress? And what is needed for us to fully embrace the potential of AI done well?

Technology is not neutral

My background is in academia, policy and innovation, and I tend to think about challenges in a very systematic way.
So, beyond what we train AI to do, there’s also the how and the who. AI is a reflection of our societies. If AI tools reflect what we put into them, we also have a responsibility not to duplicate or amplify the prejudices that exist in our society. Technology is not neutral because we are not neutral. This is also true of AI. If we are not mindful, we risk creating tools that perpetuate inequality and bias, capitalize on our vulnerabilities and exclude rather than include. To avoid this, people with different backgrounds, perspectives and experiences must work together to build these tools and develop their applications.
Part of this is dispelling the myth that one must have a technical background to contribute to AI. Of course, we need coders and programmers and data scientists. But if we’re positioning AI for systemic impact, we also need the people who understand situations on the ground. We need to invite contributions from people with backgrounds in healthcare, conflict management, climate