Women in AI: Sarah Bitamazire helps companies implement responsible AI


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution.

Sarah Bitamazire is the chief policy officer at the boutique advisory firm Lumiera, where she also helps write the newsletter Lumiera Loop, which focuses on AI literacy and responsible AI adoption.

Before that, she worked as a policy adviser in Sweden, focusing on gender equality, foreign affairs legislation, and security and defense policy.

Briefly, how did you get your start in AI? What attracted you to the field? 

AI found me! AI has been having an increasingly large impact on sectors that I have been deeply involved in. Understanding the value of AI and its challenges became imperative for me to be able to offer sound advice to high-level decision-makers. 

First, in defense and security, where AI is used in research and development and in active warfare. Second, in arts and culture, where creators were among the first groups to see both the added value of AI and its challenges. They helped bring copyright issues to light, such as the ongoing case in which several daily newspapers are suing OpenAI. 

You know that something is having a massive impact when leaders with very different backgrounds and pain points are increasingly asking their advisors, “Can you brief me on this? Everyone is talking about it.” 

What work are you most proud of in the AI field?

We recently worked with a client that had tried and failed to integrate AI into their research and development work streams. Lumiera set up an AI integration strategy with a roadmap tailored to their specific needs and challenges. The combination of a curated AI project portfolio, a structured change management process, and leadership that recognized the value of multidisciplinary thinking made this project a huge success. 

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

By being very clear on the why. I am actively engaged in the AI industry because there is a deeper purpose and a problem to solve. Lumiera’s mission is to provide comprehensive guidance to leaders, allowing them to make responsible decisions with confidence in a technological era. This sense of purpose remains the same regardless of which space we move in. Male-dominated or not, the AI industry is huge and increasingly complex. No one can see the full picture, and we need more perspectives so we can learn from each other. The challenges that exist are huge, and we all need to collaborate. 

What advice would you give to women seeking to enter the AI field?

Getting into AI is like learning a new language or a new skill set. It has immense potential to solve challenges across sectors. What problem do you want to solve? Find out how AI can be part of the solution, and then focus on solving that problem. Keep learning, and get in touch with people who inspire you. 

What are some of the most pressing issues facing AI as it evolves?

The rapid speed at which AI is evolving is an issue in itself. I believe asking this question regularly is an important part of navigating the AI space with integrity. We do this every week at Lumiera in our newsletter. 

Here are a few that are top of mind right now: 

  • AI hardware and geopolitics: Public sector investment in AI hardware (GPUs) will most likely increase as governments worldwide deepen their AI knowledge and start making strategic and geopolitical moves. So far, there is movement from countries like the U.K., Japan, UAE, and Saudi Arabia. This is a space to watch. 
  • AI benchmarks: As we continue to rely more on AI, it is essential to understand how we measure and compare its performance. Choosing the right model for a given use case requires careful consideration. The best model for your needs may not necessarily be the one at the top of a leaderboard. Because the models are changing so fast, the accuracy of the benchmarks will fluctuate as well. 
  • Balancing automation and human oversight: Believe it or not, over-automation is a thing. Decisions require human judgment, intuition, and contextual understanding, none of which can be replicated through automation.
  • Data quality and governance: Where is the good data?! Data flows in, throughout, and out of organizations every second. If that data is poorly governed, your organization will not benefit from AI, point blank. And in the long run, this could be detrimental. Your data strategy is your AI strategy. Data system architecture, management, and ownership need to be part of the conversation.

What are some issues AI users should be aware of?

  • Algorithms and data are not perfect: As a user, it is important to be critical and not blindly trust the output, especially if you are using technology straight off the shelf. The technology, and the tools built on top of it, are new and evolving, so keep this in mind and apply common sense.
  • Energy consumption: The computational requirements of training large AI models, combined with the energy needed to operate and cool the required hardware infrastructure, lead to high electricity consumption. Gartner predicts that by 2030, AI could consume up to 3.5% of the world’s electricity. 
  • Educate yourself, and use different sources: AI literacy is key! To be able to make good use of AI in your life and at work, you need to be able to make informed decisions regarding its use. AI should help you in your decision-making, not make the decision for you.
  • Perspective density: To understand what types of solutions can be created with AI, you need to involve people who know their problem space really well, and to do so throughout the AI development life cycle. 
  • The same thing goes for ethics: It’s not something that can be added “on top” of an AI product once it has already been built — ethical considerations have to be injected early on and throughout the building process, starting in the research phase. This is done by conducting social and ethical impact assessments, mitigating biases, and promoting accountability and transparency. 

When building AI, recognizing the limitations of the skills within an organization is essential. Gaps are growth opportunities: They enable you to prioritize areas where you need to seek external expertise and develop robust accountability mechanisms. Factors including current skill sets, team capacity, and available monetary resources should all be evaluated. These factors, among others, will influence your AI roadmap. 

How can investors better push for responsible AI? 

First of all, as an investor, you want to make sure that your investment is solid and lasts over time. Investing in responsible AI safeguards financial returns and mitigates risks related to, for example, trust, regulation, and privacy. 

Investors can push for responsible AI by looking at indicators of responsible AI leadership and use. A clear AI strategy, dedicated responsible AI resources, published responsible AI policies, strong governance practices, and integration of human reinforcement feedback are factors to consider. These indicators should be part of a sound due diligence process. More science, less subjective decision-making. Divesting from unethical AI practices is another way to encourage responsible AI solutions. 

