Not anytime soon. Photo courtesy Universal Studios.

Artificial intelligence terrifies some people. There are concerns that AI will take over jobs and, eventually, take over society, throwing humanity into a precisely dug smoldering pit.

Tech folks, for numerous reasons, don’t want you to think their new products pave the smoldering-pit path. So to assess how AI will actually affect humans, Stanford launched the One Hundred Year Study on Artificial Intelligence and released its first report this week. The paper assesses how AI will shape society in 2030, and the results should calm anyone fearing some form of digital apocalypse.

The gist is that we’ll see relatively small, handy additions to existing services. There will be more and smarter robot vacuum cleaners. Medical devices and some diagnostic services will use AI. Security cameras and crime-analysis tools could benefit from AI that learns about perps. Things like that.

Stanford’s report was initiated and funded by Eric Horvitz of Microsoft Research. It dovetails with a just-created consortium of tech luminaries assessing how to create AI that keeps society’s best interests in mind.

That consortium’s worry is that AI’s greatest potential is to do things that suck for certain people: fully autonomous trucks put truck drivers out of work, and the same goes for jobs in factories, food service, and so on.

Thus, representatives from Microsoft, Apple, Alphabet, IBM, and Facebook have been discussing what to do about AI. It’s an early dialogue — an AI-equipped vacuum isn’t yet a danger to society (as far as we know). But the mere existence of these groups, and of Stanford’s study, signifies just how disruptive the people creating AI think it could become.


Do You Really Need to Learn to Code?

The premise of software-as-a-service companies is to create programs that handle tasks humans (or some other expensive, tangible thingy) would otherwise perform. This Fast Company article tackles a very relevant question on that front: Do you need to learn to code if coders build programs that code for you?

Consider websites. A person used to have to be an HTML wizard to build a well-functioning website; today, WordPress, Squarespace, and other programs handle the coding mumbo-jumbo for customers. App development has followed a similar path, so what’s to say complex tasks such as AI development won’t follow it, too? Computer science has pretty consistently worked this way: some whiz kid builds something, then builds a program that builds said thing for them.

You can read Fast Company’s story and say, well, this allows humanities majors to have an impact in computer science — cool. But consider today’s educational climate, particularly in places like Seattle. Pretty much everyone wants kids to be able to code, and gobs of money are being spent to expand computer science programs at universities. So, knowing we can create programs that handle coding for us, is that money well spent?

Computer science is a technically focused education, especially if it takes place at a trade school or junior college. So if programs increasingly handle coding tasks (note: there will always be demand for people who can create those programs), students may be better off studying, say, philosophy, and bringing that perspective to computer science.


Elsewhere on the Web

The New Yorker has the best explanation thus far of why Ireland doesn’t want Apple to pay a $14 billion tax bill.

Boy, millennials are boring — let’s start making stereotypes about “Generation Z” instead.

Seriously, do we need that much cardboard and packing peanuts?

C’mon, Iran just wants a few 737s.

Photo by Steve Santamaria

This post was written in virtual reality with an Oculus Rift and Bellevue-based Envelop VR’s platform. Buy our October issue to find out how it went.