Tractor Hacking: The Farmers Breaking Big Tech's Repair Monopoly

From youtube.com: When it comes to repair, farmers have always been self-reliant. But the modernization of tractors and other farm equipment over the past few decades has left most farmers in the dust, thanks to diagnostic software that large manufacturers hold a monopoly over. In this episode of State of Repair, Motherboard goes to Nebraska to talk to the farmers and mechanics who are fighting large manufacturers like John Deere for the right to access the diagnostic software they need to repair their tractors.

Creative brains

From radio.seti.org: Your cat is smart, but its ability to choreograph a ballet or write computer code isn’t great. A lot of animals are industrious and clever, but humans are uniquely ingenious and creative. Neuroscientist David Eagleman and composer Anthony Brandt discuss how human creativity has reshaped the world. Find out what is going on in your brain when you write a novel, paint a watercolor, or build a whatchamacallit in your garage. But is Homo sapiens’ claim on creativity destined to be short-lived? Hear why both Eagleman and Brandt are prepared to step aside when artificial intelligence can do their jobs.

Optimal Transport Theory - New Frontiers in Mathematics - Cédric Villani

From youtube.com: New Frontiers in Mathematics, an international symposium hosted by Imperial College London and CNRS. Professor Villani from Université Claude Bernard (Lyon) discusses optimal transport theory, artificial intelligence, and the journey and opportunities that a career in mathematics can offer.

Past, present, and future of neuroscience

From unsupervisedthinkingpodcast.blogspot.com: In this very special episode of Unsupervised Thinking, we bring together a group of neuroscientists and neuroscience enthusiasts to have a semi-structured discussion on the past, present, and future of the field of neuroscience. The group includes your three regular hosts plus Yann, Alex, and Ryan (whose voice you may recall from our Deep Learning episode) and we each give our thoughts on what got us into neuroscience, what we feel the field is lacking, and where the field will be in 20 years. This leads us on a path of discussing statistics, emergence, religion, depression, behavior, engineering, society, and more!

Fairness in Machine Learning: Lessons from Political Philosophy

From arxiv.org: Abstract: What does it mean for a machine learning model to be 'fair', in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise 'fairness' in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.

PDF link.
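One of the fairness criteria the abstract mentions, that everyone has an equal probability of obtaining some benefit, is often operationalised as demographic parity. A minimal sketch of how that criterion could be measured (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means both groups receive the benefit (a positive
    prediction) at the same rate, one way of operationalising the
    'equal probability of obtaining some benefit' criterion.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: a model that approves 3/4 of group 0 but only 1/4 of group 1.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(preds, groups))  # 0.5
```

As the paper argues, this is only one of several mutually incompatible formalisations; minimising this gap says nothing about, say, harms to the least advantaged.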

Does computational complexity restrict artificial intelligence (AI) and machine learning?

From youtube.com: Can machines think? Philosophy and science have long explored this question. Throughout the 20th century, attempts were made to link this question to the latest discoveries -- Gödel's theorem, quantum mechanics, undecidability, computational complexity, cryptography, etc. Starting in the 1980s, a long body of work led to the conclusion that many interesting approaches towards achieving AI, even modest ones, were computationally intractable, meaning NP-hard or similar. One could interpret this body of work as a "complexity argument against AI."

But in recent years, empirical discoveries have undermined this argument, as computational tasks hitherto considered intractable turn out to be easily solvable on very large-scale instances. Deep learning is perhaps the most famous example.

This talk revisits the above-mentioned complexity argument against AI and explains why it may not be an obstacle in reality. We survey methods used in recent years to design provably efficient (polynomial-time) algorithms for a host of intractable machine learning problems under realistic assumptions on the input. Some of these can be seen as algorithms to extract semantics or meaning out of data.
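A classic illustration of the gap between worst-case hardness and practical behaviour is k-means clustering: minimising the k-means objective is NP-hard in the worst case, yet the simple alternating heuristic below routinely solves realistic instances in a few iterations. This sketch is an illustrative example of the phenomenon the talk describes, not an algorithm from the talk itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyds_kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm for k-means.

    The k-means objective is NP-hard to minimise in the worst case,
    but on well-separated data this heuristic typically finds the
    right clustering quickly, one instance of worst-case-intractable
    problems being easy on realistic inputs.
    """
    # Initialise centres at k distinct data points.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated blobs: easy in practice despite worst-case hardness.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centers = lloyds_kmeans(X, k=2)
```

The "realistic assumptions on the input" mentioned above play the same role here that separation plays for the two blobs: structure in the data turns a worst-case-hard problem into a tractable one.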

Down from the mountains

From vimeo.com: At fourteen, Wang Ying doesn’t want to be a mother. But she must take care of her younger brother and sister and do the chores and farm work, all while trying to keep up at school. The siblings are from the Yi ethnic group, who live in the mountainous Liangshan region of southwestern China. Their parents work in a factory over 1,000 miles away to earn money to give the children a better future.

The movement of economic migrants from the Chinese countryside to wealthier, urban areas has left around 9 million rural children like them alone or in the care of relatives. Evidence suggests such children are more likely to develop behavioural problems and to drop out of school earlier than their peers. A documentary project follows the family in their two different worlds, and examines the dilemma faced by many rural parents who must choose between providing for their children economically or emotionally. It also highlights the challenges faced by some of China’s poorest and most marginalised people as they try to keep pace with the country’s rapid development.

A longer version of this film was produced in collaboration with the Pulitzer Center on Crisis Reporting and ChinaFile, a project of the Asia Society Center on U.S.-China Relations.


LSD in Silicon Valley

From bbc.co.uk: How California's tech entrepreneurs are turning to LSD for inspiration. Ed Butler speaks to George Burke, a Silicon Valley worker who takes small doses of the drug to help him work more productively - a practice called microdosing. He hears from Professor David Nutt at Imperial College London - one of the few people doing scientific research into LSD. And veteran Silicon Valley journalist Mike Malone explains why this is only the tip of the iceberg when it comes to tech firms and the limits of the mind and body.
