Fairness in Machine Learning: Lessons from Political Philosophy

From arxiv.org: Abstract: What does it mean for a machine learning model to be 'fair', in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise 'fairness' in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.

PDF link.
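Two of the criteria the abstract alludes to can be written down directly; the notation below is a standard illustration rather than the paper's own. For a binary predictor Ŷ, outcome Y and protected attribute A:

  Demographic parity: P(Ŷ = 1 | A = a) = P(Ŷ = 1 | A = b) for all groups a, b
  Equalised odds: P(Ŷ = 1 | A = a, Y = y) = P(Ŷ = 1 | A = b, Y = y) for y in {0, 1}

The first roughly captures "equal probability of obtaining some benefit", while a criterion in the spirit of "minimising harms to the least advantaged" would instead bound the error rate of the worst-off group rather than equalise rates across groups.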

Does computational complexity restrict artificial intelligence (AI) and machine learning?

From youtube.com: Can machines think? Philosophy and science have long explored this question. Throughout the 20th century, attempts were made to link it to the latest discoveries: Gödel's theorem, quantum mechanics, undecidability, computational complexity, cryptography and so on. Starting in the 1980s, a long line of work led to the conclusion that many interesting approaches to achieving AI, even quite modest ones, are computationally intractable, meaning NP-hard or similar. One could interpret this body of work as a "complexity argument against AI."

But in recent years, empirical discoveries have undermined this argument: computational tasks hitherto considered intractable turn out to be easily solvable in practice, even on very large instances. Deep learning is perhaps the most famous example.

This talk revisits the above-mentioned complexity argument against AI and explains why it may not be an obstacle in reality. We survey methods used in recent years to design provably efficient (polynomial-time) algorithms for a host of intractable machine learning problems under realistic assumptions on the input. Some of these can be seen as algorithms to extract semantics or meaning out of data.
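One concrete illustration of this pattern (offered here as an example; it may or may not be among those covered in the talk) is non-negative matrix factorisation, which asks for a decomposition

  M ≈ A W, with A and W entrywise non-negative.

The problem is NP-hard in general (Vavasis, 2009), yet under the "separability" assumption that each topic has an anchor word appearing in no other topic, Arora, Ge, Kannan and Moitra (2012) showed it can be solved exactly in polynomial time.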

Down from the mountains

From vimeo.com: At fourteen, Wang Ying doesn’t want to be a mother. Yet she must look after her younger brother and sister and do the chores and farm work, all while trying to keep up at school. The siblings belong to the Yi ethnic group of the mountainous Liangshan region in southwestern China. Their parents work in a factory over 1,000 miles away to earn money to give the children a better future.

The movement of economic migrants from the Chinese countryside to wealthier urban areas has left around 9 million rural children like them alone or in the care of relatives. Evidence suggests such children are more likely to develop behavioural problems and to drop out of school earlier than their peers. The documentary project follows the family across their two different worlds and examines the dilemma faced by many rural parents, who must choose between providing for their children economically and caring for them emotionally. It also highlights the challenges faced by some of China’s poorest and most marginalised people as they try to keep pace with the country’s rapid development.

A longer version of this film was produced in collaboration with the Pulitzer Center on Crisis Reporting and ChinaFile, a project of the Asia Society Center on U.S.-China Relations.


LSD in Silicon Valley

From bbc.co.uk: How California's tech entrepreneurs are turning to LSD for inspiration. Ed Butler speaks to George Burke, a Silicon Valley worker who takes small doses of the drug to help him work more productively, a practice called microdosing. He hears from Professor David Nutt at Imperial College London, one of the few people doing scientific research into LSD. And veteran Silicon Valley journalist Mike Malone explains why this is only the tip of the iceberg when it comes to tech firms probing the limits of the mind and body.

Life, Interrupted

From npr.org/planetmoney: What price do we pay for the constant interruptions we get from our phones and computers? And is there a better way to handle distraction? In this week's Radio Replay we bring you a favorite conversation with the computer scientist Cal Newport. Plus, Shankar gets electrodes strapped to his head to test a high-tech solution to interruptions.

The future of humanity and technology - Stephen Fry

From youtube.com: Stephen Fry, actor, comedian, journalist, author, tech enthusiast and polymath, delivered his Shannon Lecture, "The future of humanity and technology". Across more than 150 film, TV and audio performances, over 20 written works and a Twitter following of more than 12 million, Fry’s wit and wisdom have been read, seen or heard around the globe by multiple generations.

Fry explores the impact of emergent technologies on humanity and, in classic Bell Labs style, looks back at human history to understand the present and the future. He outlines how humans have adapted to revolutionary changes in all aspects of life over past millennia, and uses this as a basis for conjecture about the future of human existence in the age of the machine, or industrial, internet, and how best to navigate these murky technological and societal waters.
