Algorithmic Decision-Making and Accountability

From YouTube: Jeff Larson, Safiya Noble, and Nikhyl Singhal join the Stanford teaching team (Rob Reich, Mehran Sahami, Jeremy Weinstein, and Hilary Cohen) to illuminate the ethical and social dimensions of algorithmic decision-making. They discuss competing notions of algorithmic fairness, the use of algorithms in practice (in both the public and private sectors), and questions of accountability, transparency, and governance.

If you knew everything, could you predict anything? A thought experiment

From YouTube: In this Wireless Philosophy video, Richard Holton (MIT) discusses the classic philosophical problem of free will: the question of whether we human beings decide things for ourselves, or are forced to go one way or another. He distinguishes between two different worries. One worry is that the laws of physics, plus facts about the past over which we have no control, determine what we will do, and that this means we are not free. Another worry is that because the laws and the past determine what we will do, someone smart enough could know ahead of time what we would do, so we cannot be free. He says the second worry is much worse than the first, but argues that it does not follow from the first.

MIT AI: Brains, Minds, and Machines - Tomaso Poggio

From YouTube: Tomaso Poggio is a professor at MIT and the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, he has had a profound impact on our understanding of the nature of intelligence, in both biological and artificial neural networks. He has been an advisor to many influential researchers and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science. This conversation is part of the Artificial Intelligence podcast and the MIT course 6.S099: Artificial General Intelligence.

Causal Effects and Overlap in High-dimensional or Sequential Data

From YouTube: Large data sources such as electronic medical records or insurance claims present opportunities to study the causal effects of interventions that are difficult to evaluate experimentally. One example is the management of septic patients in the ICU, which typically involves performing several interventions in sequence, the choice of each depending on the outcomes of the others. Successfully evaluating the effect of these choices depends on strong assumptions, such as having adjusted for all confounding variables. While many argue that high-dimensional data makes this assumption more plausible, it also introduces new challenges: the more variables we use to estimate effects, the less likely it is that patients who received different treatments are similar in all of them. In this talk, we will discuss the role of overlap in causal effect estimation through the lens of domain adaptation and off-policy reinforcement learning.
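The overlap problem the abstract describes can be made concrete with a toy simulation (an illustration of the general idea, not anything from the talk itself): if we insist on finding an exactly matching control patient for each treated patient, the chance of success collapses as the number of covariates grows, even when treatment is assigned completely at random.

```python
import random

random.seed(0)

def shared_pattern_fraction(n, d):
    """Simulate n treated and n control patients, each with d binary
    covariates, and return the fraction of treated patients whose exact
    covariate pattern also appears among the controls -- a crude measure
    of overlap between the two groups."""
    treated = [tuple(random.randint(0, 1) for _ in range(d)) for _ in range(n)]
    control = {tuple(random.randint(0, 1) for _ in range(d)) for _ in range(n)}
    return sum(p in control for p in treated) / n

# Overlap collapses as the number of covariates grows.
overlap = {d: shared_pattern_fraction(500, d) for d in (2, 5, 10, 20)}
for d, frac in overlap.items():
    print(f"{d:2d} covariates: {frac:.3f} of treated patients have an exact control match")
```

With 2 covariates nearly every treated patient has an identical control; with 20 covariates almost none do, which is why high-dimensional adjustment trades one assumption (no unmeasured confounding) for another (sufficient overlap).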

Making intelligence intelligible - Rich Caruana

From YouTube: In the world of machine learning, there has been a notable trade-off between accuracy and intelligibility: either the models are accurate but difficult to make sense of, or easy to understand but prone to error. That is why Dr. Rich Caruana, Principal Researcher at Microsoft Research, has spent a good part of his career working to make the simple more accurate and the accurate more intelligible. Today, Dr. Caruana talks about how the rise of deep neural networks has made understanding machine predictions more difficult for humans, and discusses an interesting class of smaller, more interpretable models that may help to make the black-box nature of machine learning more transparent.
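The "smaller, more interpretable models" in Caruana's published work are generalized additive models (GAMs), where the prediction is a sum of one-feature shape functions that can be plotted and inspected individually. The sketch below is a deliberately simplified toy version of that additive idea (per-bin lookup tables fitted by one-feature-at-a-time backfitting), not his actual method or code:

```python
import random

random.seed(1)

N_BINS = 4

def bin_index(x, n_bins=N_BINS):
    # Features are assumed to lie in [0, 1).
    return min(int(x * n_bins), n_bins - 1)

def fit_additive(X, y, n_bins=N_BINS, passes=10):
    """Fit prediction = intercept + sum_j shape_j(x_j), where each shape
    function is a per-bin lookup table fitted to the residual left over
    by all the other features (simple backfitting)."""
    n, d = len(X), len(X[0])
    shapes = [[0.0] * n_bins for _ in range(d)]
    intercept = sum(y) / n
    for _ in range(passes):
        for j in range(d):
            # Residual with feature j's current contribution removed.
            resid = [y[i] - intercept
                     - sum(shapes[k][bin_index(X[i][k], n_bins)]
                           for k in range(d) if k != j)
                     for i in range(n)]
            for b in range(n_bins):
                members = [resid[i] for i in range(n)
                           if bin_index(X[i][j], n_bins) == b]
                if members:
                    shapes[j][b] = sum(members) / len(members)
    return intercept, shapes

def predict(intercept, shapes, x):
    return intercept + sum(s[bin_index(v)] for s, v in zip(shapes, x))

# Toy data: the target really is additive in the two features.
X = [[random.random(), random.random()] for _ in range(400)]
y = [3 * x1 + (1.0 if x2 > 0.5 else 0.0) for x1, x2 in X]
intercept, shapes = fit_additive(X, y)

# The model is inspectable: each shape function can be read off directly.
print("feature 0 shape:", [round(v, 2) for v in shapes[0]])
print("feature 1 shape:", [round(v, 2) for v in shapes[1]])
print("prediction at (0.9, 0.9):", round(predict(intercept, shapes, [0.9, 0.9]), 2))
```

The point of the exercise is that unlike a deep network, the fitted model is just an intercept plus two small tables: feature 0's shape rises roughly linearly and feature 1's shape shows the step at 0.5, so a human can audit what the model learned.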

Fight or flight: the veterans at war with PTSD

From YouTube: One hundred years on from the end of the First World War, a group of veterans in Dorset are torn between their pride in their military careers and their anger over the lack of psychological support provided to them by the Ministry of Defence. With many feeling abandoned and left to battle significant mental health issues such as PTSD alone, former soldier Andy Price decides to take matters into his own hands, launching the Veteran's Hub, a peer-to-peer support network for veterans and their families. Over the course of a year, the Guardian's Richard Sprenger follows Andy on his journey.

What if He Falls?

From YouTube: In 2017, when Alex Honnold made his stunning free-solo ascent of Yosemite's El Capitan, he was taking an unimaginable risk: nearly three thousand feet of climbing without any ropes or safety equipment. But was the climb made even riskier by the filmmakers who accompanied him? In "What if He Falls?" filmmakers Elizabeth Chai Vasarhelyi and Jimmy Chin take us inside the process of documenting Honnold's quest for climbing glory, and the ethical calculus of filming a friend who could, with the slip of a finger, plummet to his death.

This Brain Implant Could Change Lives

From YouTube: It sounds like science fiction: a device that can reconnect a paralyzed person's brain to his or her body. But that is exactly what the experimental NeuroLife system does. Developed by Battelle and Ohio State University, NeuroLife uses a brain implant, an algorithm, and an electrode sleeve to give paralysis patients back control of their limbs. For Ian Burkhart, NeuroLife's first test subject, the implications could be life-changing.
