What I learned about AI at Black Tech Fest 2024

Written by Tenika Blake | Oct 29, 2024 9:01:33 AM

Black Tech Fest 2024 provided insights into how companies can adopt artificial intelligence (AI) in a way that’s sustainable, inclusive, and impactful.

The conference took place in London on 10 October.

It calls itself “Europe’s largest diverse tech event” and covered a wide range of topics beyond AI – although AI certainly featured heavily on the agenda.

There were talks on how to integrate AI into organisations effectively, taking into account ethics, diversity, and long-term success.

Before I get into how companies should think about adopting AI, here's why they need to do it: because their people are using AI anyway.

In fact, a survey by Salesforce found that 50% of people are using unapproved AI tools at work. That might mean, for example, uploading sensitive client information to ChatGPT.

That’s a strong motivator for organisations to introduce approved AI tools that are secure and properly managed.

Here’s what I learned about how that should work in practice.

Align AI with business goals

The first step companies need to take is aligning AI initiatives with their overall business strategy.

AI can’t be a separate, standalone project. It must address real business challenges and opportunities.

Before jumping into AI adoption, businesses should ask themselves:

  1. What problems are we trying to solve?
  2. How will AI improve efficiency or create value in the long term?

AI’s return on investment (ROI) shouldn’t be measured by immediate gains or speed, but by its ability to drive efficiency and improve workflows over time.

AI adoption is a marathon rather than a sprint: success will come through thoughtful, measured implementation that keeps the business’s ultimate goals in focus.

Prioritise inclusivity: engage those furthest from AI

One of the most important takeaways from the event was the emphasis on inclusivity in AI adoption. AI isn’t just about technology; it’s about people.

Companies need to engage with the communities and individuals who will be most impacted by AI – especially those furthest from adopting it.

This includes creating opportunities for people from underrepresented backgrounds to contribute to AI development, so that AI systems reflect a wide range of perspectives.

AI isn’t just for tech experts. It’s for everyone. By involving diverse voices early on, companies can avoid building biased or incomplete systems.

Building AI literacy and upskilling

A big theme at Black Tech Fest was the need to build AI literacy, both within organisations and across society. AI isn’t something only data scientists or tech teams need to understand.

To get the most from AI, all employees should have a basic understanding of how it works and how it can be used in their roles.

This means investing in upskilling programs for staff, from the front lines to senior leadership.

AI should be seen as a tool that can enhance work in every part of the business, not a specialist technology reserved for technical teams.

Governance and ethics: setting the right standards

AI governance is just one part of the adoption process, but it’s essential to get it right from the start.

Companies should set up governance frameworks that make sure AI is used ethically and responsibly.

This involves having clear guidelines around data usage, privacy, and bias.

At Black Tech Fest, one idea that resonated with me was the importance of having dedicated AI leads for each department.

These leads are responsible for overseeing AI implementation, understanding the ethical concerns specific to their area, and ensuring that AI systems are fair and transparent.

Governance also means continuously testing AI systems with diverse data sets.

A key point raised was the need to ship AI products in beta stages and test them with underrepresented groups. This helps identify any bias or gaps in the data early, allowing for adjustments before full release.
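As a loose illustration of what that testing could look like in practice (my own sketch, not something shown at the event), the idea is to evaluate a model separately for each demographic group in a held-out dataset, so under-performance for any one group shows up before release. The column names, metric, and the five-point accuracy gap threshold below are hypothetical assumptions.

```python
# Hypothetical sketch: disaggregated evaluation of a model before a beta release.
# Column names ("group", "label") and the 5-point gap threshold are illustrative
# assumptions, not details from the talk.
import pandas as pd
from sklearn.metrics import accuracy_score


def evaluate_by_group(test_df: pd.DataFrame, predictions: pd.Series) -> pd.DataFrame:
    """Report accuracy per demographic group so gaps are visible, not averaged away."""
    results = []
    for group, rows in test_df.groupby("group"):
        acc = accuracy_score(rows["label"], predictions.loc[rows.index])
        results.append({"group": group, "n": len(rows), "accuracy": acc})
    return pd.DataFrame(results).sort_values("accuracy")


# Example usage with a held-out test set and model predictions:
# preds = pd.Series(model.predict(test_df[features]), index=test_df.index)
# report = evaluate_by_group(test_df, preds)
# if report["accuracy"].max() - report["accuracy"].min() > 0.05:
#     print("Accuracy gap exceeds 5 points - investigate before full release")
```

Running a check like this on every iteration, rather than once at the end, is what makes the beta-and-adjust approach described above work.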

Test, learn, and iterate – embrace the ‘fail fast’ approach

Another major lesson was the importance of rapid testing and iteration when adopting AI.

Rather than spending years perfecting an AI system before launch, companies should embrace a ‘fail fast’ mindset.

This involves testing early prototypes, learning from the results, and iterating quickly. It’s an agile approach that allows for faster improvements and reduces the risk of investing in unsuccessful ideas.

Getting input from diverse communities and users early in the development process is crucial.

By doing this, companies can understand the real-world implications of their AI systems and make adjustments before rolling them out more widely.

This approach also helps to build a culture of continuous learning and improvement, which is essential for long-term AI success.

Diversity in data and development: from consumers to producers

AI adoption also requires a shift in thinking about who contributes to AI development.

Instead of being passive consumers of AI products, underrepresented communities should be given the tools and opportunities to become producers.

Companies need to make sure that the data used to train AI systems is representative and inclusive.

This can only happen if diverse voices are involved from the beginning, helping to shape the datasets and models used.

Corporate social responsibility (CSR) initiatives can play a huge role here.

By supporting digital literacy and skills programs in underserved communities, businesses can create pathways for people from underrepresented backgrounds to become AI creators.

It’s about more than just addressing bias – it’s about creating opportunities for a broader group of people to lead in AI development.

One of the panellists spoke about the importance of educating children and young people about AI, suggesting we should be teaching it from primary school age.

Storytelling and AI: making data relatable

One unexpected insight I gained at Black Tech Fest was about the importance of storytelling in AI adoption.

Data is only valuable if it tells a clear story that's easy to understand. Otherwise, to most people, it’s just numbers and context-free information.

When we build AI workflows, the data that feeds into AI models needs to be structured in a way that mirrors how we communicate in real life.

If the data doesn’t make sense to people who will use or be impacted by AI, the systems will struggle to deliver value.
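As a loose illustration of that point (my own sketch, with hypothetical field names, not an example from the event): the same reading is far more useful, both to a model and to the people reviewing its output, when it carries descriptive labels and context rather than arriving as a bare row of numbers.

```python
# Illustrative sketch only: contrasting context-free numbers with a labelled,
# human-readable record. Field names and values are hypothetical.
raw_row = [42, 0.87, 3]  # what do these numbers mean? What units? What context?

labelled_record = {
    "customer_age_years": 42,
    "satisfaction_score": 0.87,          # scale: 0.0 (unhappy) to 1.0 (happy)
    "support_tickets_last_quarter": 3,
}

# The labelled record reads almost like a sentence: a 42-year-old customer with a
# satisfaction score of 0.87 who raised three support tickets last quarter.
```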

Collaboration between user-centred designers, data scientists, and AI specialists is crucial here.

By working together, they can make sure that AI systems learn from data that is structured in a meaningful and human-centred way.

This not only improves the functionality of AI systems but also makes them more relatable and user-friendly.

Conclusion: a human-centred approach to AI

Black Tech Fest 2024 highlighted that AI adoption isn’t just about technology – it’s about people, strategy, and inclusivity.

To adopt AI effectively, companies must align their AI projects with their overall business goals, prioritise inclusivity, and make sure they are continuously learning and adapting.

Involving diverse voices from the start, building AI literacy, and focusing on ethical governance will make AI adoption smoother and more impactful.

Ultimately, AI’s success isn’t measured by how fast it can process data, but by how well it helps companies make better decisions and become more efficient and productive over time.

AI has the potential to positively change the way we work and live, but only if we approach it thoughtfully, with inclusivity and long-term value in mind. This is the future of AI that I’m excited to see unfold.