How do you teach a car that a snowman won’t walk across the road?


From the article: Picture yourself driving down a city street. You go around a curve, and suddenly see something in the middle of the road ahead. What should you do?

Of course, the answer depends on what that ‘something’ is. A torn paper bag, a lost shoe, or a tumbleweed? You can drive right over it without a second thought, but you’ll definitely swerve around a pile of broken glass. You’ll probably stop for a dog standing in the road but drive straight through a flock of pigeons, knowing that the birds will fly out of the way. You might plough right through a pile of snow, but veer around a carefully constructed snowman. In short, you’ll quickly determine the actions that best fit the situation – what humans call having ‘common sense’.

Human drivers aren’t the only ones who need common sense; its lack in artificial intelligence (AI) systems will likely be the major obstacle to the wide deployment of fully autonomous cars. Even the best of today’s self-driving cars are challenged by the object-in-the-road problem. Perceiving ‘obstacles’ that no human would ever stop for, these vehicles are liable to slam on the brakes unexpectedly, catching other motorists off-guard. Rear-ending by human drivers is the most common accident involving self-driving cars.

Read the full text [link]

Machine teaching

From Youtube: Machine learning is a powerful tool that enables computers to learn by observing the world, recognizing patterns and self-training via experience. Much like humans. But while machines perform well when they can extract knowledge from large amounts of labeled data, their learning outcomes remain vastly inferior to humans when data is limited. That’s why Dr. Patrice Simard, Distinguished Engineer and head of the Machine Teaching group at Microsoft, is using actual teachers to help machines learn, and enable them to extract knowledge from humans rather than just data.

Today, Dr. Simard tells us why he believes any task you can teach to a human, you should be able to teach to a machine; explains how machines can exploit the human ability to decompose and explain concepts to train ML models more efficiently and less expensively; and gives us an innovative vision of how, when a human teacher and a machine learning model work together in a real-time interactive process, domain experts can leverage the power of machine learning without machine learning expertise.

Troubling Trends in Machine Learning Scholarship - Zachary Lipton

From Youtube: The machine learning community is struggling to deal with several well-documented crises in scholarship: (i) a blurring of fact and fancy, (ii) experiments divorced from falsifiability, (iii) math that cannot, should not, and often isn’t meant to be followed, and (iv) exposition that sows confusion and distorts the public discourse. However, in other ways, the field is healthier than ever: (a) a vibrant economy supports careers in machine learning, (b) mature tooling makes algorithms easier to run and experiments easier to reproduce, and (c) the field is far more welcoming and accessible to new talent. While, at an individual level, clear steps can improve the quality of research and the resulting papers, what steps can be taken at the community level is a far more challenging question. What levers can influence community practices? Who should pull them? And which interventions can curb flawed scholarship without undermining the community’s strengths? This talk will aim to present a balanced picture of the status quo, the ecosystem that supports it, and the difficulty of improving upon it.

Social Perception for Machines - Yaser Ajmal Sheikh

From Youtube: Despite decades of progress, machines remain intelligent tools rather than collaborative partners in individual human enterprise. A key reason is that machine perception of inter-personal communication is largely unsolved, and a computationally accessible representation of such behavior remains elusive. In this talk, I will describe our research arc over the past decade at CMU to make human signaling a perceptible channel of information for machines. This research includes the construction of the Panoptic Studio, a multisensor facility designed to capture social behavior, and the development of OpenPose, a realtime 2D pose estimation approach whose demo you may have encountered on the fourth floor of NSH. I will share recent progress in moving from the lab to the real world and discuss future directions in this research expedition.

Creating God

From Hidden Brain from NPR: If you've taken part in a religious service, have you ever stopped to think about how it all came to be? How did people become believers? Where did the rituals come from? And what purpose does it all serve? This week, we bring you a July 2018 episode with social psychologist Azim Shariff. He argues that we should consider religion from a Darwinian perspective, as an innovation that helped human societies to thrive and flourish.

Securing the vote - Josh Benaloh

From Youtube: If you’ve ever wondered why, in the age of the internet, we still don’t hold our elections online, you need to spend more time with Dr. Josh Benaloh, Senior Cryptographer at Microsoft Research in Redmond. Josh knows a lot about elections, and even more about homomorphic encryption, the mathematical foundation behind the end-to-end verifiable election systems that can dramatically improve election integrity today and perhaps move us toward wide-scale online voting in the future. Today, Dr. Benaloh gives us a brief but fascinating history of elections, explains how the trade-offs among privacy, security and verifiability make the relatively easy math of elections such a hard problem for the internet, and tells the story of how the University of Michigan fight song forced the cancellation of an internet voting pilot.
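The homomorphic encryption Benaloh mentions is what makes end-to-end verifiable tallying possible: individual ballots stay encrypted, yet their encryptions can be combined so that only the final tally is ever decrypted. A minimal sketch of the idea, using a toy Paillier cryptosystem with deliberately tiny primes (this is an illustration of additive homomorphism, not Benaloh's actual scheme, and nothing here is remotely secure):

```python
from math import gcd
from random import randrange

# Toy Paillier setup with tiny primes (illustration only -- real keys
# use primes of 1024+ bits).
p, q = 61, 53
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(x):
    """Paillier's L function: L(x) = (x - 1) / n."""
    return (x - 1) // n

# Decryption constant mu = L(g^lam mod n^2)^(-1) mod n.
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    """Encrypt m with fresh randomness r: c = g^m * r^n mod n^2."""
    while True:
        r = randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
# Each voter encrypts 1 (yes) or 0 (no); the product of all ballots
# decrypts directly to the vote count, without opening any single ballot.
votes = [1, 0, 1, 1, 0]
tally_ct = 1
for v in votes:
    tally_ct = (tally_ct * encrypt(v)) % n2

print(decrypt(tally_ct))  # prints 3: the yes-count, with no ballot decrypted
```

This is the trade-off the episode describes in miniature: privacy (ballots are never individually decrypted) coexists with verifiability (anyone can recompute the ciphertext product), and the hard part is everything around this core, from proving each ballot encrypts 0 or 1 to distributing the decryption key.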

If you knew everything, could you predict anything? A thought experiment

From Youtube: In this Wireless Philosophy video, Richard Holton (M.I.T.) discusses the classic philosophical problem of free will --- that is, the question of whether we human beings decide things for ourselves, or are forced to go one way or another. He distinguishes between two different worries. One worry is that the laws of physics, plus facts about the past over which we have no control, determine what we will do, and that means we're not free. Another worry is that because the laws and the past determine what we'll do, someone smart enough could know what we would do ahead of time, so we can't be free. He says the second worry is much worse than the first, but argues that the second doesn't follow from the first.

MIT AI: Brains, Minds, and Machines - Tomaso Poggio

From Youtube: Tomaso Poggio is a professor at MIT and is the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound impact on our understanding of the nature of intelligence, in both biological neural networks and artificial ones. He has been an advisor to many highly impactful researchers and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science. This conversation is part of the Artificial Intelligence podcast and the MIT course 6.S099: Artificial General Intelligence.


Subscribe to Ricardo Martins' RSS feed