I enjoyed reading «The Equality Machine» by Orly Lobel. The book elaborates on the opportunity of «harnessing digital technology for a brighter, more inclusive future»: using digitalisation and artificial intelligence to improve our world and to overcome historical biases and discrimination against women, racial minorities, and other social groups. Orly offers a fresh, optimistic message: that ML and AI can make decision-making more objective and help us avoid the biases and prejudices that govern human judgement. The book surveys a wealth of initiatives and research on AI applied to surprisingly diverse areas such as harassment prevention, healthcare, online dating, and sex, or simply serving as a companion or artificial complement to our lives.
Her approach differs substantially from that of many recent books focusing on the perils of improperly handled AI and ML algorithms. In «Weapons of Math Destruction», Cathy O’Neil masterfully discusses the massive impact of biased machine learning algorithms on the lives of people asking for a loan, applying for a job, or facing the justice system. In «Privacy is Power», Carissa Véliz explores how our personal data are collected and suggests practical steps to reduce our exposure to surveillance capitalism. Some years ago, in a polemical book, Jaron Lanier proposed that people should get paid for their data to make the Internet more sustainable and to avoid the socioeconomic problems attached to massive digitalisation.
In a way, I see this as the «dark» side of AI and digitalisation, the Yin, as opposed to the tech companies, scholars, and analysts embracing new technology and developing promising use cases to make an impact on society, increase productivity, and contribute to economic growth: the Yang of AI. Like the Yin and Yang, the two visions are opposing yet complementary, and it is only by acknowledging and building on this duality that we will get the most out of AI.
Georg Wilhelm Friedrich Hegel believed in a ‘dialectical’ evolution of the course of history: a given status quo triggers a reaction in the opposite direction, and the two end up in a new, superior status quo that synthesises and reconciles both views. For example, communism appeared as a reaction to capitalism, and the clash eventually gave way to today’s (more or less) social democracies.
Like the duality of Yin and Yang, the Hegelian dialectic emphasises the importance and complementarity of two conflicting perspectives in building a new one that learns from both. The rise of communism was a consequence of the success (and excess) of capitalism and the need to ameliorate some of its adverse effects. Keynesian economics and social democratic societies picked and combined elements of both systems, arriving at a new, superior status quo.
Similarly, I think the discourse about the perils of AI is a sensible reaction to the initial peak of inflated expectations about this technology («Wow! AI rocks! It is great and beneficial. Let’s use it everywhere») and to the massive ongoing efforts to create innovative AI systems for countless use cases. This opposition, and the spread of knowledge about how these systems work, led to concerns about privacy, lack of transparency, and intellectual property infringement, among others. I think «The Equality Machine» offers the best synthesis of the two perspectives I have read on the topic so far.
Although Orly acknowledges that responsible development and regulation of AI are critical, she strongly believes that AI and machine learning, if properly designed and implemented, can excel at making informed decisions. She sees an opportunity to build an artificial machine for equality, provided we focus our research and development efforts on the right activities. She makes polemical proposals open for debate; for example, she advocates collecting sensitive data such as race or sex precisely in order to better detect bias in data and algorithms.
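Her point about sensitive attributes can be illustrated with a tiny sketch: without recording the protected attribute alongside each outcome, the per-group selection rates below simply could not be computed, and the bias would stay invisible. The data, group labels, and the scenario itself are entirely made up for illustration; the demographic parity difference used here is just one common, simple fairness metric among many.

```python
# Hypothetical outcomes of a screening model: 1 = shortlisted, 0 = rejected.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
# The sensitive attribute recorded alongside each outcome.
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def selection_rate(group):
    """Fraction of applicants in `group` that the model shortlisted."""
    selected = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(selected) / len(selected)

rate_a = selection_rate("A")   # 3 of 5 shortlisted -> 0.6
rate_b = selection_rate("B")   # 1 of 5 shortlisted -> 0.2
# Demographic parity difference: the gap between group selection rates.
gap = abs(rate_a - rate_b)     # ~0.4, a large disparity worth investigating
print(f"Selection rates: A={rate_a:.1f}, B={rate_b:.1f}, gap={gap:.1f}")
```

Note that deleting the `groups` column, a common privacy reflex, would make this check impossible; that is exactly the trade-off her proposal puts on the table.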
And here are my two cents. Since it is humans who design, feed, and operate AI models, it is only by ethically designing and building interpretable, explainable models that we will reach the next level in the development of AI technologies. This is easier said than done. Interestingly, we are already building (and sometimes already have) the tools we need. We have many Explainable AI techniques to understand why a model made a certain prediction, partly overlapping Interpretable AI methods to explain how it made that prediction, and a number of industry-backed tools for building trustworthy, fair AI.
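As a concrete taste of what such tooling looks like, here is a minimal sketch of one common explainability technique, permutation importance, which estimates how much each feature drives a model’s predictions by shuffling it and measuring the drop in performance. The dataset, model, and library choice (scikit-learn) are mine, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic data: only the first of three features actually drives the label.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and record how much accuracy degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# The informative feature should dominate the importance scores.
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Running this, feature 0 gets a large importance score while the two noise features stay near zero: the technique has explained, without opening the model up, which input the prediction really depends on.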
In conclusion, developing responsible, ethical-by-design AI systems can take us to the next level in the exploitation of this technology. In my opinion, this book supports the view of the European Union and many international organisations calling for the development of responsible, transparent, and trustworthy AI. How can we move towards this objective? Who should be responsible for it? Who should be held accountable? Who should be in control, and how? I leave these complex questions for future posts.
Thanks for reading! I hope you found the post interesting. I would be very happy to hear your comments!
