Past, present, and future of neuroscience

From In this very special episode of Unsupervised Thinking, we bring together a group of neuroscientists and neuroscience enthusiasts to have a semi-structured discussion on the past, present, and future of the field of neuroscience. The group includes your three regular hosts plus Yann, Alex, and Ryan (whose voice you may recall from our Deep Learning episode) and we each give our thoughts on what got us into neuroscience, what we feel the field is lacking, and where the field will be in 20 years. This leads us on a path of discussing statistics, emergence, religion, depression, behavior, engineering, society, and more!

Fairness in Machine Learning: Lessons from Political Philosophy

From Abstract: What does it mean for a machine learning model to be 'fair', in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise 'fairness' in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.
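The "equal probability of obtaining some benefit" reading the abstract mentions is often operationalised as demographic parity: the rate of positive predictions should be (roughly) equal across groups. A minimal sketch of that metric, on toy data not taken from the paper (all names and numbers here are illustrative assumptions):

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups
    (groups labelled 0 and 1) -- one common formalisation of fairness."""
    def positive_rate(g):
        members = [p for p, a in zip(y_pred, group) if a == g]
        return sum(members) / len(members)
    return abs(positive_rate(0) - positive_rate(1))

# Toy example: six individuals, binary predictions, two groups.
y_pred = [1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 1, 1, 1]
# Group 0 benefit rate: 2/3; group 1: 1/3; gap = 1/3.
print(demographic_parity_gap(y_pred, group))
```

Competing definitions discussed in this literature (e.g. minimising harm to the least advantaged) would measure something different on the same predictions, which is exactly the tension the paper traces back to political philosophy.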

PDF link.

Does computational complexity restrict artificial intelligence (AI) and machine learning?

From Can machines think? Philosophy and science have long explored this question. Throughout the 20th century, attempts were made to link this question to the latest discoveries: Gödel's theorem, quantum mechanics, undecidability, computational complexity, cryptography, etc. Starting in the 1980s, a long body of work led to the conclusion that many interesting approaches, even modest ones, towards achieving AI were computationally intractable, meaning NP-hard or similar. One could interpret this body of work as a "complexity argument against AI."

But in recent years, empirical discoveries have undermined this argument, as computational tasks hitherto considered intractable turn out to be easily solvable on very large-scale instances. Deep learning is perhaps the most famous example.

This talk revisits the above-mentioned complexity argument against AI and explains why it may not be an obstacle in reality. We survey methods used in recent years to design provably efficient (polynomial-time) algorithms for a host of intractable machine learning problems under realistic assumptions on the input. Some of these can be seen as algorithms to extract semantics or meaning out of data.

Down from the mountains

From At fourteen, Wang Ying doesn’t want to be a mother. But outside school hours she must take care of her younger brother and sister and do the chores and farmwork, while also trying to keep up with her studies. The siblings belong to the Yi ethnic group, who live in the mountainous Liangshan region of southwestern China. Their parents work in a factory more than 1,000 miles away to earn money to give the children a better future.

The movement of economic migrants from the Chinese countryside to wealthier, urban areas has left around 9 million rural children like them alone or in the care of relatives. Evidence suggests such children are more likely to develop behavioural problems and drop out of school earlier than their peers. A documentary project follows the family in their two different worlds, and examines the dilemma faced by many rural parents who must choose between providing for their children economically or emotionally. It also highlights the challenges faced by some of China’s poorest and most marginalised people as they try to keep pace with the country’s rapid development.

A longer version of this film was produced in collaboration with the Pulitzer Centre on Crisis Reporting and ChinaFile, a project of the Asia Society Centre on U.S.-China Relations.

LSD in Silicon Valley

From How California's tech entrepreneurs are turning to LSD for inspiration. Ed Butler speaks to George Burke, a Silicon Valley worker who takes small doses of the drug to help him work more productively, a practice called microdosing. He hears from Professor David Nutt at Imperial College London, one of the few people doing scientific research into LSD. And veteran Silicon Valley journalist Mike Malone explains why this is only the tip of the iceberg when it comes to tech firms and the limits of the mind and body.

Life, Interrupted

From What price do we pay for the constant interruptions we get from our phones and computers? And is there a better way to handle distraction? In this week's Radio Replay we bring you a favorite conversation with the computer scientist Cal Newport. Plus, Shankar gets electrodes strapped to his head to test a high-tech solution to interruptions.
