8. AI and Decentralisation: Own AI or Be Owned By It

About this content

In this episode, Max Sebti, co-founder and CEO of Score, challenges the centralised control of computer vision systems and makes the case for decentralised AI as a matter of public interest.

Max brings experience from AI data annotation and model development, where he witnessed how closed systems collect and control vast amounts of visual data. Now at Score, running on the Bittensor network, he's building "open source computer vision": systems that are publicly verifiable, permissionless, and collectively owned rather than corporately controlled.

His central argument: we face a choice between "own AI or be owned by AI." As computer vision expands from sport into healthcare, insurance, and public surveillance, who controls these systems becomes existential. Max argues citizens should have access to model weights and training data as a democratic necessity.

We explore what decentralisation means in practice: how Bittensor's incentive mechanisms unlock talent and data that centralised systems can't access, why open source doesn't sacrifice performance, and the stark reality that camera systems are making decisions about you based on models you cannot see.

Max introduces competing visions: a "Skynet" scenario where private entities own all visual data, versus a "solarpunk" future of abundant energy and AGI where open AI serves collective benefit. The difference? Transparency, accountability, and public ownership.

The conversation tackles thorny questions: where should boundaries exist in open systems? How do you prevent misuse whilst maintaining accessibility? Max admits his team hasn't solved this: decentralised AI means thousands of contributors with different values building toward the same goal.

Max closes with a call to action: push for open source AI models where people can verify, query, and hold systems accountable. His vision moves AI from corporate product to public utility - not because it's idealistic, but because the alternative is too dangerous.

AI Ethics Now

Exploring the ethical dilemmas of AI in Higher Education and beyond.

A University of Warwick IATL Podcast

This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.

Join us each fortnight for new critical conversations on AI ethics with local, national, and international experts.

We will discuss:

  • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
  • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
  • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.

If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.