“Learn to Code!” This imperative to program seems to be everywhere these days. Bill Gates and Mark Zuckerberg recently donated ten million dollars to Code.org, a non-profit that believes that “every student in every school should have the opportunity to learn computer programming,” and that “computer science should be a part of the core curriculum.” So-called “developer boot camps” are popping up everywhere. For second graders, recent college graduates, and people looking for a new career alike, the implication seems to be: Take an intensive course to learn to code and forget about everything else.

In the May 29th issue of Newsweek, the tech columnist Kevin Maney argued that all this coding reeducation might soon be unnecessary. “Computers are about to get more brainlike and [soon] will understand us on our terms, not theirs,” he wrote. When that happens, “the very nature of programming will shift.” He projects that by 2030 we won’t program at all; we’ll simply tell our machines what we want them to do. Computer programming will be as quaint as cursive handwriting.

The Defense Department, for example, has an ambitious project called MUSE (Mining and Understanding Software Enclaves), which aims to “develop radically [new] approaches for automatically constructing and repairing complex software.” If the project succeeds, the project’s director told Maney, “someone who knows nothing about programming languages will be able to program a computer.” It’s a bold goal, but it remains to be seen whether the vision of the self-programming computer will become a reality anytime soon.

We last heard promises like these seven years ago, when one of computer programming’s brightest lights, Charles Simonyi, who spearheaded Microsoft Office, left Microsoft to found a company called Intentional Programming. According to Technology Review, Simonyi’s goal was to develop a system that “would enable programmers to express their intentions without sinking in the mire of so-called implementation details that always threatened to swallow them.” Since then, almost no concrete results have been reported. Computer scientists have been dreaming of automatic computer programming for decades, but outside of very limited domains (like point-and-click interfaces that can be used to develop telecommunication networks) there has been little tangible progress.

The impulse to simplify programming started in the early days, when computers were still room-sized mainframes built out of vacuum tubes. The advent of stored programs written in machine language, for instance, allowed computers to carry out new tasks simply by loading new instructions, replacing the labor-intensive, error-prone process of physically rewiring and rearranging vacuum tubes and jumper cables. Early high-level languages, such as LISP, FORTRAN, and Cobol, developed in the late nineteen-fifties and early nineteen-sixties, allowed programmers to work with abstract constructs like loops and functions. (All of those early languages are still in use; a great deal of data analysis, for example, still relies on a set of linear-algebra routines that were written in FORTRAN.)

Since those heady early days, though, there has been only evolution, not revolution. Dozens of incremental advances, like debuggers (which allow programmers to spot bugs more easily and more reliably) and libraries of freely available code, have made coding easier but have not fundamentally changed its core concepts.
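For a sense of what those abstractions bought, here is a minimal sketch, written in Python rather than in one of those early languages purely for readability: a single function and loop stand in for the long sequences of loads, comparisons, and jumps that a machine-language programmer once had to spell out by hand.

    # A high-level function with a loop: the language, not the
    # programmer, handles the underlying counters and branch logic.
    def average(values):
        total = 0.0
        for v in values:        # the loop construct hides the jumps
            total += v
        return total / len(values)

    print(average([3.0, 4.0, 5.0]))   # prints 4.0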
Today’s young programmers still think in terms of algorithms and data structures, just as their grandparents might have. What information will my program store in memory, and how? What steps will the computer take to transform inputs into outputs? All programming is, and remains, on the machine’s terms.

What DARPA and Simonyi are hoping for is a complete paradigm shift. A programmer—and this could be anyone—would simply tell the computer what he needed in plain English, and the computer would figure out the rest. Anyone would be able to program, not just highly trained specialists, and, at least in principle, computers might ultimately produce much more reliable code than their human counterparts. This all sounds great. But before we reach the era of self-programming computers, three fundamental obstacles must be overcome.

First, there is currently no method for describing what a piece of software should do that is both natural for people and usable by computers. Existing “formal specification languages” are far too complex for novices, and English itself is still beyond the reach of machines. Programs like Siri have improved dramatically in recent years, and they can comprehend English in limited contexts, but they lack the precision required for building computer programs. When Siri hears “What Italian restaurants are around here?” she knows your location, and it’s fine that she only understands the words “Italian” and “restaurant.” But there is a world of difference between “Delete every file that has been copied” and “Copy every file that has been deleted.” For now, there is no reliable way to make a computer understand the difference between the two.

The second problem is that good programs aren’t just montages of existing code. DARPA’s MUSE tries to solve the programming problem with an intelligent version of Web search that automatically mines the massive libraries of online code that are now freely available, hunting for code that might sensibly be stitched together. This might work in limited domains, but it is unlikely to work as a general solution. A good programmer doesn’t just cut and paste snippets of code together (though that is part of the job); a good programmer understands, deeply, the problem that needs to be solved, and then creates an architecture for solving a problem that’s never been solved before. It’s one thing to find relevant snippets, and another to ensure that they connect up right.

The third obstacle is that computers still have too little understanding of how the external world works, and therefore too little understanding of how the programs they create will actually behave. Consider, for example, this seemingly simple, hypothetical programming task: “Add a feature to Google Maps that allows a user to place a simulated boat on a river and have it float downstream.” To do this, you need to know what a river is, what a boat is, and what it means for a boat to float downstream. Any human programmer knows that, but no computer system has the real-world understanding of an average human being.
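To make the gap between those two file instructions concrete, here is a minimal, hypothetical sketch in Python. It assumes one particular reading, that “copied” means a file of the same name already exists in a backup directory; that reading is itself a decision a human programmer would have to make before writing a single line.

    import os
    import shutil

    # Hypothetical reading: a file counts as "copied" if a file with the
    # same name exists in backup_dir, and as "deleted" if it now exists
    # only in backup_dir.

    def delete_every_file_that_has_been_copied(src_dir, backup_dir):
        # Destructive: removes originals that already have a backup copy.
        for name in os.listdir(src_dir):
            if os.path.exists(os.path.join(backup_dir, name)):
                os.remove(os.path.join(src_dir, name))

    def copy_every_file_that_has_been_deleted(src_dir, backup_dir):
        # Restorative: brings back files that survive only in the backup.
        for name in os.listdir(backup_dir):
            if not os.path.exists(os.path.join(src_dir, name)):
                shutil.copy(os.path.join(backup_dir, name), src_dir)

Two nearly identical English sentences yield two opposite programs: one destroys files, the other restores them, and nothing on the surface of the English signals which is which.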
As Tom Dean, a researcher at Google, told us, “Programming is [challenging for artificial intelligence] not because it requires concentration and attention to detail but because the path from the conception of what you want to accomplish to the realization of code that actually accomplishes it requires artistry, insight, and creativity as well as incredible mental dexterity.” One day computers may have that kind of dexterity and intuition; the DARPA program is a good first step in that direction. But the path to the automated, thinking computer will also require a shift in research priorities, from the currently popular focus on the question “What can you do with Big Data?” back to A.I.’s original, driving one: “How do you build machines that are broadly intelligent?” It’s certainly possible that machines may someday be able to program themselves, but in a generation in which even the nerdiest, most cloistered programmer in Silicon Valley continues to have a far better intuitive sense of the world than any computer does, that day still feels a long way away.
