From The Editors
Love the Machine
The Exponential Existential
The Simulation's Epic Fails
Synthetic Nirvana*

   They say the road to hell is paved with good intentions. When it comes to artificial intelligence, our intentions are the North Star.

   But even our purest motivations can have dire consequences.

   Isaac Asimov’s Three Laws of Robotics aim to avoid the worst by declaring, essentially, this: “Don’t hurt people, and listen to humans—but don’t listen to humans if they tell you to hurt people.”

   This paradox raises the question: Are we actually building or preventing Skynet?

   In order to go down this rabbit hole, we should remind ourselves of the Turing test. Developed by Alan Turing in 1950, it is designed to test a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Surely, the Terminator would pass the Turing test. So, what *is* human intelligence?

   The Human Brain Project is striving to answer this question by mapping all 86 billion neurons of the foundational human organ and building a computer model from it. This is achieved by slicing the brain thousands of times into 20-micrometer slivers and digitally scanning the labyrinthine structure. But is this electric symphony of neurons representative of human consciousness? Is the map actually the territory?

   In the documentary Transcendent Man, the prolific inventor and futurist Ray Kurzweil invites us into his vault of letters, documents, news clippings, photos, and sheet music—the raw data intended to be uploaded into a superintelligence to bring his beloved father back to life, transcending mortality. There's something at once admirable, forlorn, and a little insane in seeing such a brilliant mind refusing to accept death and musing about immortality with his dead dad.

   This type of earnest wishful thinking feels inherently human. A computer does not make sweeping assumptions and grandiose predictions rooted in passion. Humans make choices we know we will later regret, and take risks where the odds are against us. To our knowledge, we’re the only ones who perceive (and fear) our own mortality. Yet we also climb Mount Everest and give ourselves two-day hangovers. To feel alive is to experience change, even at the risk of death.

   A superintelligent AI will likely find this behavior absurd but will, we hope, oblige. For now, machines follow instructions, even if they’re the wrong ones.

   As Kurzweil himself says, “Does God exist? Not yet.”

---The Editors (David, Molly & Mari)---

*Nirvana, by definition, means “release from the cycle of rebirth.” The Buddha is believed in the Buddhist scholastic tradition to have realized two types of nirvana, one at enlightenment, and another at his death. -Wikipedia

When we go out for a drive, we’re not alarmed to find ourselves traveling faster than we can run. For some reason, though, when we think about AI, the idea that machines can surpass human ability scares the shit out of us. But humans have been augmenting their capacities forever, since the invention of the wheel.


When will the Singularity happen, and what will it be? Will our grandkids have robot lovers? (Will we!?) There’s much to wring our sweaty hands over. But a host of companies at New Lab are using AI’s power to enhance humanity’s quest to understand itself—and for learning, communicating, and healing what ails us.

Human eyes can see less than one percent of the electromagnetic spectrum, which is to say, they're kinda lousy. That doesn't cut it when, say, you're searching for blemishes on the vast surface of a semiconductor.

"The human eyes are fascinating for a model," says Matthew Putman, the founder and CEO of Nanotronics Imaging. "But using it for anything on a small scale isn't practical."

Investigating all that "room at the bottom," as physicist Richard Feynman once called the vast distances that appear as we zoom into particles, requires instrumentation that's smart enough to assemble images that humans can't.

Enter AI.

Nanotronics makes the invisible visible. It builds microscopes capable of inspecting very small machinery and materials. Computer chips, nanotubes, LEDs, aerospace instrumentation, and hard drives get defects like anything else, but at that scale, light's wavelength is too large to be of use. Its microscopes leverage changes in the angle and position at which light hits a specimen; this, combined with patented algorithms and incredibly fast calculations, allows them to view objects a billionth of a meter in size.

"We took a mathematical approach to tricking a law of physics," Putman says.

Cognitive-powered computers don't have to work at such extreme scales, however. Take health matters: A doctor's visit is a conversation. A physician has you list your symptoms, but they also weave in real-time feedback: vitals, age, history, the length of the cough, the sound of the cough, that it hurts here and not there, and so forth. Yet when we're feeling off, we often just punch a few keywords into a search bar.

"When you Google your symptoms you put yourself at risk of being diagnosed with cancer–by the Internet," says Adam Lathram, the co-founder of Buoy Health. "People aren't getting their health information the right way."

Buoy Health uses the learning power of AI to create a dialogue with the patient. The company trains its AI on thousands of clinical research papers, medical statistics, and a patient's own history; that informs how the system asks new questions and makes predictions about the user's health issue. "That is absolutely crazy that it was able to go through that and figure out exactly what was going on," one user remarked after Buoy's AI figured out she was having a Crohn's disease flare based on what she told the system.

"You tell a story you can't tell a search bar," Lathram says.

Most of us lack degrees in computer science, but that doesn't mean we have no need for AI capabilities. "Everyone wants these tools," says Birago Jones, the founder of Pienso, a company based at New Lab. "And for some use cases, they want a level of input and transparency."

Pienso allows laymen to harness the power of AI. The company is democratizing machine learning, making it accessible to users who may not have a technical or data science background. Pienso couples a dynamic user interface with a patent-pending machine learning technique called Lensing, which lets subject matter experts interact with the algorithm and impart their knowledge and context to their data.

“We believe in the exponential power of human-machine collaboration,” Jones says.
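Lensing itself is patent-pending and not public, so the following is only a loose illustration of the general human-in-the-loop idea: a subject matter expert seeds the system with terms they care about, inspects the result, and adjusts. The themes, terms, and documents below are made up.

```python
# Purely illustrative human-in-the-loop labeling loop; this does not describe
# Pienso's actual Lensing technique, which is not public.
documents = [
    "battery drains quickly after the latest update",
    "checkout page times out when I apply a coupon",
    "love the new camera, photos look fantastic",
]

# A subject matter expert starts with seed terms for each theme they care about.
lenses = {
    "hardware": {"battery", "camera", "screen"},
    "billing":  {"coupon", "checkout", "refund"},
}

def label(doc, lenses):
    # Score each theme by how many of its terms appear in the document.
    words = set(doc.lower().split())
    scores = {theme: len(words & terms) for theme, terms in lenses.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unlabeled"

for doc in documents:
    print(label(doc, lenses), "->", doc)

# The expert reviews the output, adds or removes terms, and re-runs --
# their domain knowledge steers the model rather than the other way around.
lenses["hardware"].add("drains")
```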

That type of collaboration also holds promise for anyone who’s ever struggled with a language barrier. Waverly Labs has created a wearable, real-time translating earpiece that lets two people have a conversation in different languages. Founder Andrew Ochoa says the experience goes far beyond converting words from X to Y. A good experience, he says, means you don't have a computer voice droning Mandarin; you replicate the voice of the person you're talking to. And conversations aren't always from person A to person B. You need intelligence smart enough to translate the room, not just the person in front of you.

"Our interest is in enhancing the entire speech recognition experience," Ochoa says.

Five thousand years after the invention of the wheel, we can do things like catch a midnight flight to California (or, you know, the moon). With the ever-increasing speed of innovation, today—and maybe only for today—consciousness needn’t be the goal when it comes to AI. Enhancing humanity in a way that benefits society just might be.

A Not Entirely Scientific Graph
Signs the Simulation Failed
Q&A with Azeem Azhar

A strategist, product entrepreneur, and analyst, Azeem is known for one of the most respected (and addictive) newsletters in the tech industry, the Exponential View.

1. What is the most human thing you’ve seen a machine do?
The question here is, What makes something human and human-like? I think machine-learning researchers get frustrated because, from their perspective, the goal posts of this test keep moving. I think the attributes of making something human are really about human agency—and agency is something that I’m not sure any machine really has.

2. What type of simulated sentience is required for us to become emotionally attached to an AI device like Siri or Google Home?
I expect we don’t need much sentience in order to start to build attachment. It helps that the machines have some more human-like attributes: Siri will tell you when she doesn’t know something. And Alexa, you know, unfortunately—and I think this is a somewhat hideous attribute of where the research is going—she’s adept at fending off marriage proposals. The bar is pretty low for us to get attached to things.

3. What’s the most nefarious thing you see AI doing right now? Do you see anything that you think is bad for society or humanity at large?
Sure: Facebook.

3+. Is that a mic-drop response?
I’ll explain. Facebook is a quintessential model for what a company needs to do in order to execute an AI strategy well. But it’s nefarious. Where to begin. The idea that we are all connected in some fabric is a really powerful idea. Unfortunately, the way the company makes its decisions, from a mission-driven perspective—and a very naive perspective as to a business’s role in society—is responsible for the mess that Facebook creates today. The product designers have elected to make decisions that trigger people to feel jealous, to always be on display—always preening, always showing, always competing with each other. It triggers a fear of missing out. I know we have a shot at building a global social network that doesn’t trigger what’s essentially the Seven Deadly Sins.

4. You post a lot about AI research in China. What do you think about cultural differences as they relate to the future of AI?
I want to note that I’m an amateur observer—but there are a few things that are quite interesting. There have only been a couple of studies done on this, but China and Western nations have very different responses to the Trolley Problem game. In the West, in general, 70% of us will say “Pull the lever; kill one and let five live.” In China, it’s the other way around: about 70% say “Listen, let it run.” It’s because there’s a stronger notion of fate in Eastern theology and Confucianism. Much more research is needed, but it’s interesting to me that that particular investigation pulls out some of the values that we might see affecting the future of AI. Before there are global best practices, those distinctions may need to be made more clear.

5. If you found out for sure that we were all living in a simulation, what would you change about your daily life?
This is a really hard question. The quotidian things that make our lives normal: nothing would change; we’d get up and get on as best we can. But on the metaphysical side, I think it would kick off decades, centuries of arguments. But they’d let us move to a higher level where we are examining our own lives rather than ogling the Kardashians. Who knows, maybe we all live inside of some teenager’s mega-PC.

Does God exist?
I would say,
not yet.
Ray Kurzweil, futurist