This is true in some organizations, but in other places the difference between "technician" and "engineer" literally is an engineering degree. I currently work at such an organization: you can't have the title "engineer" unless you have an engineering degree, technicians cannot rise above a certain level in the organization (which effectively puts a ceiling on their pay, as well), and technicians cannot be placed in positions of authority over engineers. A lot of this is stupid and makes no sense: I know "technicians" who are doing the same work as "engineers," better than many of the engineers they work with, but they are paid less and work under less experienced team leads, all because they lack a degree. Unfortunately, I'm sure there are plenty of other places that do things the same way.
Thinking about some of the people I've known who have worked in these roles, I wonder if the dichotomy between engineer and technician can be better described as theoretical vs. practical and design vs. maintenance.
Also, some engineers only have 2-year degrees, the same as many technicians. But the content of the education is different: very broadly speaking, it breaks along theoretical vs. practical lines, with some crossover.
>Interesting to note that computer/math degree holders don't fare far better than recent humanities and liberal arts graduates. Journalism majors perform even better.
...if you only look at the unemployment numbers. Scroll down and look at the earnings numbers and you will see a very different picture.
I would be very interested to see the standard deviations on these unemployment and earnings numbers. I'd be willing to bet that they are much larger for humanities/liberal arts/journalism majors than for the engineering/computer science/math majors.
A liberal arts degree is very much what you make of it: if you are smart, motivated, and disciplined, you can get an incredible education and emerge with a much sharper mind. On the other hand, if you are lazy, stupid, and/or unmotivated, you can carefully pick classes and profs that require minimal effort to get by, and graduate without developing yourself in any meaningful way. I went through college with both types of people. I knew one guy who chose English as his major because he thought it would be the easiest major. He was right, mostly because he made it true: every semester, when it was time to sign up for the next semester's classes, he would build a matrix of all of the books on the reading lists of all of the available classes, and then figure out which combination of classes had the greatest amount of overlap. This saved him a lot of money on books, a lot of time on reading, and probably some effort on writing. At the end of four years he had a degree and a job, and that's all he really cared about. If he had put half as much effort into work as he put into avoiding work, he could have had himself a hell of an education.
A technical degree, on the other hand, doesn't allow for such a wide range of effort inputs: if you put in minimal effort you won't just waste your time, you will fail. I saw this happen as well. More commonly, I saw people realize, "Holy shit, this stuff is serious, I can't just cruise!" and switch majors to something where they could get away with minimal effort.
As Jtsummers pointed out, humanities/liberal arts majors seem to enter into a wider range of employment fields, including (as ghurlman pointed out) software. As yummyfajitas pointed out, ability bias accounts for a large portion of the college wage premium. However, over the last few decades the percent of people going to college has increased dramatically. Some of those new people most likely have suitable levels of ability to attend college, but a lot of them almost certainly do not. Those who do not either fail out or gravitate to programs that will allow them to graduate despite the fact that they are not getting a true college education, like the English major I mentioned above. I think that these people are pulling down the numbers for such majors. If you were somehow able to sort out the "serious" liberal arts graduates from the "slackers," I think that you would find that those individuals who put in the effort to truly benefit from their liberal arts educations have unemployment and earnings numbers at least comparable to people with technical degrees.
He's still wrong: human error is a factor in the overwhelming majority of aviation mishaps (I don't have stats handy, but it's something in the neighborhood of 85% IIRC).
This must be understood in the context that while it is almost always a factor, it is rarely the only factor. Aviation mishaps typically occur when three or more factors combine. For example, if you're just low on gas, or just in bad weather, or just a little bit tired, you probably won't have a mishap, but put all three together and things can rapidly get out of hand. In the case of AF 447, they were in bad weather (which caused a malfunction in the FCS), they had CRM problems (Captain was not in the cockpit, and the two pilots who were in the cockpit did not coordinate well), there were serious flaws in the HMI design (the most egregious example, in my mind, being the averaging of the stick inputs), and the crew was not sufficiently trained on how to respond to FCS failures.
Under current tax laws they would most likely only do this in states where they already have shipping centers (Nevada and ...?) because opening such a store would create a nexus, requiring them to collect use taxes on all of their online sales to the state where the store is located.
On the other hand, the problem of collecting those taxes when the customer ordered in the show-room is vastly easier than the burden they would face trying to collect taxes from online shoppers. Online, they would have to figure out where the order is from (not hard with browser location these days), and then have a way to figure out all the taxes due for various types of transactions at any point on the map. With a show-room, they can pre-figure the rates for just the show-room locations like every other store does. It isn't a "have a solution for any arbitrary point on the map" problem anymore, and is thus much more tractable and less expensive to solve.
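Here's a minimal sketch of the difference, in Python (the store names and rates are invented for illustration):

    # In-store sales: the combined rate for each of a handful of known
    # showroom locations can be precomputed once, like any other retailer.
    SHOWROOM_TAX_RATES = {
        "reno_nv": 0.0785,
        "seattle_wa": 0.0950,
    }

    def showroom_tax(store_id, subtotal):
        """Tax due on an in-store sale: a simple table lookup."""
        return subtotal * SHOWROOM_TAX_RATES[store_id]

    def online_tax(delivery_address, subtotal):
        """Tax due on an online sale: requires resolving an arbitrary
        address to a jurisdiction and rate, which is the hard part."""
        raise NotImplementedError("needs a full address-to-rate service")

    print(round(showroom_tax("reno_nv", 100.00), 2))  # 7.85

The showroom version is a two-entry lookup table; the online version is an open-ended geocoding problem.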
Of course, the smart shopper then puts together a wishlist in the store, and clicks buy on their phone out in the parking lot.
>The same is not true for many other types of goods, such as clothes. Shops are performing a useful function in letting consumers see, touch and try what they buy.
I used to feel the same way, but now I feel that this experience is highly overrated. If I could get away with buying all of my clothes on Zappos and Amazon (especially Zappos!), I would. They have plenty of pictures so that I know exactly what the clothes look like. Reviews from other users tell me more about the quality than I would be able to discern by simply handling the clothing in a store (e.g. how well it holds up to wear and tear). If an article of clothing doesn't quite fit right or if the color is a little off from how it appeared online, free two-way shipping solves my problem quickly and easily. Even in the cases where I have to exchange an item because of size or color problems, the total experience still requires way less time and effort than a single trip to a brick-and-mortar store.
The only reason why I think that brick-and-mortar clothing stores might survive against sites like Zappos is that most of the women I know (especially my wife) really enjoy the experience of shopping for clothes. Then again, I used to enjoy browsing electronics stores the same way, but I sure don't miss it anymore. Also, the aspect of clothes shopping that my wife seems to enjoy most is "getting deals:" using complicated combinations of sales, specials, and coupons to knock the price down as far as possible (in other words, to pay a price I would consider sane). It's not the only thing she enjoys about clothes shopping, but it's a very big part. If online clothes prices were to get sufficiently low, they would become such a "great deal" that I think she would be unable to resist. This process has already begun: she buys most of our daughters' clothes, and an increasing percentage of her own, online now.
And this is destroying your local retail market. Eventually, once all of the mom and pop shops are pushed out, we will be left with nothing but big box places that do their best to bone the consumer at every turn.
It is simply not possible to eke out anything more than a lower-middle class living owning a retail store any more. Honestly, it is pretty sad to watch. There are a lot of small, local businesses getting hammered by these retail giants, and they are almost powerless to stop it.
Automation is incapable of "judgement;" automation consistently and reliably responds to inputs according to pre-defined instructions. That's why automation works really well in some situations but not others. The more complex the task, the harder it is to make a complete instruction set that will result in a satisfactory outcome for every possible situation.
Even human pilots behave somewhat like automatons in some situations: in most situations they follow procedures, which could be described as "responding to inputs according to pre-defined instructions." However, they often encounter situations not covered by the procedures, in which case they must instead exercise their judgement.
The problem with judgement is that it is neither consistent nor reliable. Some humans have better judgement than others. Eventually we will have automation sophisticated enough to handle even the full complexity of aviation, at which point automation will yield safe results more consistently and reliably than human judgement.
I suppose it gets into philosophical debates about AI, but I don't see any reason in principle that we can't describe at least some kinds of computerized systems as exercising something described as "judgment". We already have one existence proof, the human brain, of a system that can exercise something we call "judgment", and I don't see a strong reason to believe that it's due to anything magical about the human brain in particular (like a soul or something along those lines), rather than just being a complex reasoning system that's able to balance many contextual factors.
Automated systems as they currently exist have fixed responses to fixed inputs. If an automatic system encounters a set of inputs it wasn't programmed for, it has no capacity to determine a best course of action for those inputs. Depending on how it was programmed, it will either keep doing what it was doing previously, or switch to a pre-programmed "contingency plan" that hopefully will result in a tolerable outcome (but which might result in catastrophe), or possibly execute a random set of instructions (which can happen to poorly-designed state machines, for example).
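To illustrate with a toy example (the states and actions here are invented): an automated system is, at bottom, a fixed mapping from inputs to actions, plus whatever fallback the designer chose for everything else.

    # A fixed input->action table with a pre-programmed contingency branch.
    # The system never "decides" anything; unanticipated inputs get
    # whatever canned fallback the designer picked, for better or worse.
    RESPONSES = {
        "on_course": "hold_heading",
        "left_of_course": "turn_right",
        "right_of_course": "turn_left",
    }

    def respond(input_state):
        return RESPONSES.get(input_state, "execute_contingency_plan")

    print(respond("left_of_course"))   # turn_right
    print(respond("sensor_conflict"))  # execute_contingency_plan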
A human being, on the other hand, has the capacity, when faced with unexpected or unfamiliar conditions, to exercise something we call "judgement" in an attempt to develop an appropriate response. I'm not saying that it's impossible for an automated system to have this capacity, I'm saying that no current automated systems have it, and that we're nowhere near to developing such a system any time soon.
That's definitely the case with deployed civilian aircraft systems, but I'd be surprised if there wasn't some unmanned system somewhere doing more complex reasoning. There was a talk years ago at IJCAI from some people from NASA Ames on a prototype aircraft-control system they'd built that used a reasoning system to assist with performing emergency landings in situations with no preprogrammed contingency, by taking into account some telemetry information (e.g. aircraft damage), map information, an aerodynamic model, and risk models.
I do believe they were planning to deploy it as a suggestion system though, which would suggest a course of action, and then leave it to the pilot to implement it or not. Then the judgment gets more murky; now the system is doing some of the judgment (evaluation of alternatives, etc.) that a human pilot would normally do, but leaving some of the judgment (accept the suggestion, modify it) still to the human.
That's not really true. It's often the case that automation is designed to reach a specific goal, and it then tries to achieve/maintain that state. E.g., Segways try to balance, and an F-15's Control Augmentation System (CAS), aka stability assistance system, tries to keep flying even without a wing. (Yes, this worked and was not programmed for.)
Plenty of other planes have lost a section of wing and still landed. http://www.airliners.net/aviation-forums/general_aviation/re... Granted, all of these cases had a pilot, but in the F-15 the avionics actually discovered how to maintain level flight after the loss of the wing.
I have a master's in aeronautical engineering, and one of my major areas of study was stability and control, which is what your Segway and F-15 examples fall under.
Automated flight control systems definitely do not exercise "judgement:" they have inputs, a transfer function (typically MIMO, these days), and outputs. It used to be that the transfer function was fixed, but more sophisticated systems (e.g. fighter jets) often have many different transfer functions and switch between them based upon various inputs. They don't "decide" or "discover" anything: for any given set of inputs, they will predictably produce a pre-determined set of outputs.
Aircraft stability and control systems are not programmed to care about, or even know about, the existence of the wings. The closest they get to this is that they will know the current states of the control surfaces on the wings. So it doesn't really make sense to say that the F-15 CAS was "not programmed for" the state of missing a wing (although it almost certainly was programmed to respond properly in the situation where it gets no feedback from some of the flight controls). As you say, it's designed to reach a specific goal (keep the plane level) and then maintain that state. If the system detects an uncommanded roll-rate, it will move the flight controls to stop that roll-rate. It doesn't know or care that the uncommanded roll-rate is the result of asymmetric lift due to an (almost completely) missing wing: it's just going to keep moving the flight control surfaces until that roll rate goes away. If the aircraft had been damaged in a slightly different way, it's possible that the CAS would have issued commands to the flight controls that would have departed the plane, but fortunately the handling characteristics of the aircraft remained close enough to normal that the control laws still produced good results.
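A rough sketch of what that looks like (the gains and numbers are invented; a real CAS is far more involved): the controller only ever sees roll-rate error, and picks its gains from a schedule based on flight condition, which is the switching between transfer functions I mentioned above.

    # The controller has no concept of "wing": an uncommanded roll from
    # asymmetric lift is indistinguishable from any other roll-rate error.
    GAIN_SCHEDULE = {
        "low_speed": 0.8,   # proportional gain on roll-rate error
        "cruise": 0.5,
        "high_speed": 0.3,
    }

    def aileron_command(commanded_roll_rate, measured_roll_rate, regime):
        k = GAIN_SCHEDULE[regime]
        return k * (commanded_roll_rate - measured_roll_rate)

    # Wings-level commanded, 12 deg/s uncommanded left roll measured:
    print(aileron_command(0.0, -12.0, "cruise"))  # 6.0: roll back right

For any given set of inputs, this produces exactly one pre-determined output, every time.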
Unfortunately, this behavior can result in mishaps when a flight control system gets erroneous inputs: when it believes that it is rolling when it is wings-level (or believes it is level when it is rolling). This was a major contributor to the Air France flight 447 crash. In such situations, it takes judgement to realize that something is wrong and to figure out what to do about it.
Ok, it was my understanding that the F-15 adjusted the transfer function based on a feedback cycle rather than simply picking from a list of them. However, thinking back the conversation was ambiguous and I don't have the clearance required to find out the correct answer.
However, while flight control systems have been responsible for plenty of crashes, pilots have often mistaken level for non-level flight, and focused on faulty instruments rather than switching to working backups, etc. An automated system can handle redundancy much more efficiently than people in such situations: while there's little value for a person in having, e.g., 7 gyroscopes if they have to pick between them, autopilots can gain from access to such information.
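For a concrete (invented) example of the kind of gain I mean, in Python:

    # Seven redundant gyros, fused by median voting: robust as long as
    # fewer than half the channels are lying. A human scanning instruments
    # can't realistically do this cross-check continuously; an autopilot can.
    from statistics import median

    def fused_roll_rate(gyro_readings):
        return median(gyro_readings)

    # Five healthy gyros near 2 deg/s, two failed channels stuck at junk:
    readings = [2.1, 1.9, 2.0, 2.2, 1.8, 45.0, -30.0]
    print(fused_roll_rate(readings))  # 2.0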
I never claimed that human pilots are superior to automated flight control systems in every aspect. In fact, I have stated explicitly that computers perform some tasks better and more safely than humans.
I was simply making the point that when you encounter a situation that isn't covered in the instructions, you need human judgement to figure out the best way to proceed. I also explicitly stated that human judgement is far from flawless, and that if you can develop a sufficiently comprehensive automation program, you can get safer results than you would on average with human judgement.
If you feed completely new information into a system it's going to do something. Sometimes it's even the correct choice, but really people also do the same thing in novel situations. I have no problem calling judgement simply deciding what to do based on the current situation, and as soon as you add any form of adaptation then computers can do that. But, I am also willing to concede you're probably using a different definition.
PS: IMO, what separates people from machine learning systems is treating everything as training data, a much larger training set, a lot more processing power, and a tendency to explore novel situations. The trade-off is efficiency and reaction times. Still, when you get into thrust vectoring, supersonic flight, high-g turns, and rapidly changing weight/drag/thrust at the same time, trading consistency for improved handling of novel situations is probably worth it, so I expect the Air Force uses systems that are far more adaptable than the civilian world.
Before claiming a machine is incapable of judgment, we would have to define what it is. Is a fly capable of judgment? A fish? This is a philosophical question and won't yield any meaningful answers. Call it judgment or feedback loop, the end result is similar.
There are actually a few levels of automation available to hornet pilots, ranging from (in layman's terms) "a little help staying on-speed and on glideslope" to "fully automatic." Some pilots hardly ever use them, some pilots use them as much as they are allowed (in order to ensure proficiency, there are limits in place to make sure the pilots don't become overly reliant on the automation). Most pilots are in-between, using them occasionally, typically when they feel like they're not at their best (e.g. at the end of a 6 hr+ combat sortie into Afghanistan). Even when on full-auto, I don't know any of them who sit "hands folded:" they're ready to take over immediately in case something goes wrong. I've heard stories that some of the test pilots eventually got comfortable enough to take their hands completely off the controls back when this stuff was first being tested, but I doubt if even they made a habit of it.
Occasionally it will disengage, leaving the pilot in full control. IIRC, when in full-auto it will automatically wave off (add power and climb out to go around and try again) when it disengages unless the pilot actively takes over; so it wouldn't create an immediately unsafe situation, but it could result in an unnecessary missed opportunity to land.
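In pseudocode form, the disengage behavior as I understand it (this is my paraphrase of the logic, not the actual implementation):

    def on_autoland_disengage(pilot_has_taken_over):
        # Default to the safe option: add power and climb out for
        # another approach, rather than continuing an unmonitored pass.
        if pilot_has_taken_over:
            return "pilot_continues_approach"
        return "automatic_wave_off"

    print(on_autoland_disengage(False))  # automatic_wave_off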
I fly for a living, and I currently work in developmental flight test. One of the programs I am working on is a UAV.
It's going to be a very, very long time before we see autonomous airliners. I'll talk about specific technical hurdles, but I think the biggest issue is psychological: it's one thing to entrust a bunch of freight to an autonomous vehicle, another thing entirely to entrust dozens of living, breathing human beings to such (the article discusses this, including the concept of "shared fate"). I am confident that autonomous airliners will only come into service when autonomous aircraft technology reaches a point where the computers are able to handle every aspect of flight safety better than humans. Right now they can already do some of those things better than humans, and those tasks have already largely been shed by human pilots and entrusted to their computers. I think there will be a gradual transition as the computers are able to take on more and more of the tasks. The article also mentions how this process is already causing basic aviation skills to atrophy in pilots who leave too much to the computers. I think this is a very real problem, and I think it was a major contributor to the Air France flight 447 crash. Prudent pilots do more manual flying than is strictly necessary because that's the only way to maintain proficiency. If this problem becomes severe enough, expect to see the FAA establish more granular proficiency requirements.
The author talks a lot about how much the military is using UAVs, and seems to think that this is a good model for civilian applications. It isn't. The military has an entirely different set of risk considerations than civil aviation. Take the example of medevac UAVs: a medevac UAV will almost certainly have a higher mishap rate than a manned medevac helo, which would be completely unacceptable for civilian purposes. However, for the military that increased mishap risk is more than offset by the risks of putting an entire human crew into harm's way just to medevac a single wounded soldier.
Military UAVs currently in use generally have much higher mishap rates than their manned counterparts, but the military tolerates this because aircrew don't die in UAV mishaps, and UAVs are generally less expensive to replace than manned vehicles. Part of the reason for this is that features designed to prevent or mitigate mishaps cost money, and it is often cheaper to leave many of them out and accept the higher mishap rate, especially when no human crew is involved. However, part of it is that autonomous systems still just aren't as good at flying safely. For the military, the benefits outweigh the costs, but I really can't see a for-profit corporation reaching the same conclusion.
>Northrop Grumman has built some sense-and-avoid savvy into the unmanned helicopters and other UAVs it's developing for the U.S. Navy.
I happen to be intimately familiar with one such system, and somewhat familiar with another. This sentence is utter BS. I can only assume that the author was fed a line by an NGC PR-type and took it at face value. A more honest way to say it would be:
Northrop Grumman is trying to incorporate limited sense-and-avoid capabilities into the unmanned helicopters and other UAVs it's developing for the U.S. Navy.
Moreover, FAA has a requirement for "see and avoid," not "sense and avoid." The military is trying hard to sell them on the idea that it should be "sense and avoid," of which "see and avoid" would be just a subset, but so far the FAA has remained deeply skeptical. The FAA is right to be conservative about this change: none of the currently proposed systems would be as effective at collision avoidance as the Mark I Eyeball, and thus far the systems I am aware of (to which the quote from the article was referring) are still a very long way from working properly. This seems to me like the kind of technical problem that eventually can be overcome, but it's going to take a lot of work to make that happen.
Because "see and avoid" is so critical to safety of flight, and because UAVs can't currently do it, the FAA does not allow UAVs to operate in its airspace, with some tightly restricted exceptions for military and law enforcement UAVs. I doubt very highly if they would make similar exceptions for civilian purposes, and even if they did, the restrictions involved are so limiting that there aren't many viable applications.
In just a couple of years the DARPA challenge yielded cars that can drive themselves in complex urban environments.
I imagine with sufficient smart people working on it, flying airplanes in relatively uncluttered sky would yield results faster.
As for see-and-avoid, perhaps it works for obstacles on the ground, but other aircraft are moving so fast, is it really feasible to eyeball them before they are on you? Perhaps in pursuit, but at any significant angle they flash past at hundreds of miles an hour. Only radar etc has a chance of identifying/avoiding at those speeds.
<Edit: spelling>
Says the human. To a computer driving is much more difficult than flying.
You could equally say adding up big numbers is much harder for you than walking across the street. To a computer the former is trivial, the latter no walking robot has yet done.
As I've mentioned elsewhere, I'm directly involved in the test and evaluation of one of the current attempts at "sense and avoid" technology. It is a long way from being safe for fully autonomous use. It is my understanding that the DARPA autonomous land vehicle contest required autonomous collision avoidance, and that more than one of the entrants did so effectively. Their budgets were a pittance compared to what is being spent on "sense and avoid" for UAVs, and yet they had much greater success. That tells me that urban traffic avoidance is easier for computers than air traffic avoidance.
I think that two big reasons for this are probably:
1: For land vehicles, simply stopping in place is almost always an effective collision avoidance tactic (unless the other vehicle is deliberately seeking a collision). This simple solution is not available to airplanes.
2: Tracking objects that are moving in three dimensions with a sensor that is also moving in three dimensions is an immensely more complex problem than tracking objects that are constrained to move on a fixed surface in two dimensions with a sensor that is also constrained to move on a fixed surface in two dimensions.
Circular reasoning? They solved the land-nav problem because it's easier. It's easier because they solved it.
My point was, there is another wrinkle. The land-nav problem was opened up, made a competition with a big marketing budget. It was also intractable, unsolved, too hard. Until lots of smart people started brainstorming and trying crazy things and cooperating.
Airplanes can change speed drastically, which is at high speeds about as effective as stopping. And no, you don't get to say collisions are hard to avoid because 3 dimensions are hard to calculate, making that not a solution.
I think I begin to see why the problem hasn't been solved yet.
It's not circular reasoning: it's a conclusion based on observation. Several groups tried to solve each problem. All of the groups that tried to solve problem A have thus far failed (although some have made measurable progress) while some of the groups that tried to solve problem B succeeded despite having considerably fewer resources at their disposal. "Hard to solve" can be a somewhat difficult label to define, but those results are a strong indicator that problem A is harder to solve.
By your logic, every "impossible" problem could be solved easily if just DARPA would offer a small prize to whoever solves it. Unfortunately, the real world doesn't work that way. There's a reason why DARPA chooses the tasks they do for their challenges: they spend a lot of time and effort identifying tasks that are highly likely to be amenable to novel solutions.
>Airplanes can change speed drastically, which is at high speeds about as effective as stopping.
Incorrect, on two counts. First, not all airplanes can change speed "drastically." Second, it is not as effective at preventing a collision as stopping. If both cars in an impending collision stop (and in many cases, even if only one of them stops), a collision becomes impossible. On the other hand, there are a lot of situations where deceleration merely delays, but does not prevent, a collision. That has value, but it's not as good.
>And no, you don't get to say collisions are hard to avoid because 3 dimensions are hard to calculate, making that not a solution.
I never said it was "not a solution," but I definitely do get to say that it's a much harder problem to solve. Here are the steps you have to perform to avoid a collision:
1: Detect an object.
2: Track the object to determine its course and speed.
3: Compare the object's course and speed to yours to determine how likely a collision is.
4: If the probability of a collision is unacceptably high, determine a change of course and/or speed which will reduce the probability of collision to an acceptable level. If the probability of collision is already acceptably low, return to step 2.
5: Maneuver to change course and speed accordingly, then return to step 2.
If you are moving in three dimensions and the objects with which you might collide are moving in three dimensions, step 2 is hard to do accurately. (Unless you've studied radar tracking, you probably don't appreciate exactly how hard, but if you're genuinely interested, Skolnik's Radar Handbook is a good place to start.) The less accurate you are at step 2, the harder steps 3, 4, and 5 become, because you have to deal with more uncertainty. Is that really where the other object is? Is that really where it will be in twenty seconds? How certain are you of that? How certain are you that there really is something even there at all? If you're wrong in one direction, you'll have a midair; if you're wrong in the other direction, you'll perform some extreme maneuver for absolutely no good reason.
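To make the 3D version of steps 2 and 3 concrete, here's the easy part: a back-of-envelope closest-point-of-approach calculation (with invented numbers). The hard part, as described above, is producing accurate position and velocity estimates to feed it from a sensor that is itself maneuvering.

    import numpy as np

    def cpa(p_own, v_own, p_tgt, v_tgt):
        """Time and miss distance at closest point of approach."""
        dp = np.array(p_tgt) - np.array(p_own)   # relative position
        dv = np.array(v_tgt) - np.array(v_own)   # relative velocity
        denom = np.dot(dv, dv)
        t = 0.0 if denom == 0 else max(0.0, -np.dot(dp, dv) / denom)
        return t, np.linalg.norm(dp + dv * t)

    # Positions in nm, velocities in nm/min, both aircraft moving in 3D:
    t, miss = cpa([0, 0, 5], [8, 0, 0], [7, -7, 5.2], [0, 8, 0])
    print(f"CPA in {t:.2f} min at {miss:.2f} nm")  # CPA in 0.88 min at 0.20 nm

Garbage estimates in, garbage miss distance out, which is exactly why the tracking accuracy of step 2 dominates everything downstream.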
I've seen government contracting many times before. You put out RFPs, a whole bunch of second-rate people reply, you pick the ones who "look" most competent. Then, surprise, surprise, it fails. This is not how hard things get done. Those most able at securing grants are almost certainly not those most able at delivering technical success.
>In just a couple of years the DARPA challenge yielded cars that can drive themselves in complex urban environments.
>I imagine with sufficient smart people working on it, flying airplanes in relatively uncluttered sky would yield results faster.
As I mentioned, many of the tasks of flying airplanes have been successfully automated. However, there are significant complexities that pilots must deal with that don't apply to cars, and most of those still require human intervention, especially in-flight emergencies. If your car engine quits, you pull over to the shoulder, turn on your hazard lights, and call AAA. If your aircraft engine quits, the response is much more complex. If the power steering on your car quits, you do pretty much the same thing as described above. If your flight controls malfunction in an airplane, the response is much more complex. If your car catches on fire, you do the same as above, plus get out. If your airplane catches on fire, you've got a much bigger problem. I could go on, but I think that gets the idea across.
Also, the sky is not "relatively uncluttered." There are a lot of airplanes flying at any given moment, and most of them are concentrated onto airways. It gets even worse in the terminal area: lots and lots of planes coming and going to and from many different directions, all in a very small piece of sky.
>As for see-and-avoid, perhaps it works for obstacles on the ground, but other aircraft are moving so fast, is it really feasible to eyeball them before they are on you? Perhaps in pursuit, but at any significant angle they flash past at hundreds of miles an hour. Only radar etc has a chance of identifying/avoiding at those speeds.
I am alive today because, on countless occasions, I and my fellow aviators have looked outside, seen another aircraft, and maneuvered to avoid a potential collision. I think that you really don't have an accurate mental image of how this works, so I'll try to explain a little bit:
Consider two airliners cruising at 424 kts each, one heading due West, the other due North. The rate of closure is 600 kts. Depending on the atmospheric conditions, they will be visible to each other at about 10 nautical miles, which gives them an entire minute to spot each other and maneuver to avoid a collision. Even if it's two supersonic fighters flying right at each other at 600 kts each, they still have thirty seconds to spot each other. A much more realistic scenario would involve two aircraft in the terminal area, where they would be moving much more slowly, giving them much more time to see each other and respond.
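You can check those numbers yourself:

    import math

    # Two airliners at 424 kts on perpendicular tracks (West and North):
    closure = math.hypot(424.0, 424.0)     # ~600 kts rate of closure
    print(round(10.0 / closure * 3600))    # ~60 s of warning over 10 nm

    # Two fighters head-on at 600 kts each: 1200 kts closure.
    print(round(10.0 / 1200.0 * 3600))     # 30 s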
I think the most dangerous situations are where two aircraft are on headings that differ by less than forty-five degrees: they are basically next to each other, closing from each other's sides, where they are less likely to be spotted. The rate of closure is probably very low, but that's more than made up for by the awkward situation with regard to field of view from many cockpits.
I think an unmanned plane actually has a much better chance of surviving a fire: it could have an inert atmosphere, or it could be unpressurized, so fires would be much less frequent. Also, fires happen mostly on freight planes (lately UPS and Asiana), and freight planes would probably be easier to certify for unmanned flying than passenger planes.
> I am alive today because, on countless occasions, I and my fellow aviators have looked outside, seen another aircraft, and maneuvered to avoid a potential collision.
Isn't this what the Traffic Collision Avoidance System (TCAS) is for? The pilots already have to do as they are told by TCAS (after the collision over Switzerland). Surely managing traffic of obedient agents with known limits of performance isn't that hard; if all the planes were unmanned, there wouldn't be problems.
However, the points that you mentioned about other emergencies and passenger traffic still stand.
This would be very costly to implement, and probably very heavy. That being said, inert gases are used in places where arcing is likely (e.g. radar waveguides).
>or it could be unpressurized so fires would be much less frequent
I really don't think this would have a significant impact on the frequency of fires. Plus, a lot of avionics need particular environmental conditions, in some cases including pressurization.
>Also fires happen mostly on freight planes
High-power electronics, such as military radars, also pose an increased risk of fire.
>freight planes would be probably easier to certify for unmanned flying than passenger planes.
If you only cared about the contents of the plane, this would be true. But what happens when a flaming ball of wreckage that used to be an unmanned freight plane plows into a suburban neighborhood, a school, or a downtown skyscraper?
>Isn't this what Traffic collision avoidance system (TCAS) is for? The pilots already have to do as they are told by TCAS (after the collision over Switzerland).
TCAS isn't all it's cracked up to be. First of all, it only works if the other plane has a transponder (there are still plenty of light civil aircraft out there with no transponders). Second, there are different versions out there with different levels of accuracy. The older kind are not very accurate at all, and basically serve only to give the pilot a general idea of where to look in order to spot the traffic and avoid the collision the old-fashioned way. The more accurate kind only works if both aircraft involved have the necessary equipment.
As for the collision over Switzerland, the reason for that rule is that the collision happened in part because TCAS and ATC gave conflicting instructions: one told plane A to go up and plane B to go down, the other told plane A to go down and plane B to go up. One crew did what TCAS said, the other did what ATC said, and they both ended up descending. So the reason for this rule isn't that TCAS is a magical panacea for midairs, but rather a way to consistently resolve any future such conflicts between TCAS and ATC.
>Surely managing traffic of obedient agents with known limits of performance isn't that hard - if all the planes were unmanned there wouldn't be problems
If all the planes were unmanned, then managing traffic would be much easier, but that still leaves other issues, like where a plane ends up when it malfunctions and crashes. It would also require a wholesale changeover that simply isn't plausible.
> TCAS isn't all it's cracked up to be. First of all, it only works if the other plane has a transponder (there are still plenty of light civil aircraft out there with no transponders).
Why hasn't it been made mandatory that all aircraft should have a standardized transponder? It scares me a bit that, at the end of the day, we rely on pilots avoiding collisions by sight.
>Why hasn't it been made mandatory that all aircraft should have a standardized transponder ?
In some parts of the world, it may be. For example, I don't know if Europe allows for aircraft without transponders. Even in the U.S. you must have a transponder to enter certain types of airspace (e.g. the airspace around major airports). As far as I know, no matter where you are in the world, you must have a transponder to fly IFR.
Installing a transponder is a non-trivial expense, especially in older aircraft (which are the aircraft most likely to not have transponders). For a lot of small aircraft in the U.S. this would represent an unnecessary burden on the owners. For example, crop dusters: they typically fly around low and slow in areas with very little traffic, under day VFR conditions, so they have no need to interact with ATC and therefore no real use for a transponder.
>It scares me a bit that, at the end of the day, we rely on pilots avoiding collisions by sight.
We don't rely solely on this: we have transponders (with TCAS in some cases), ATC radar, and (in some cases) airborne radar. All of these tools help us to avoid collisions. Unfortunately, none of them are 100% effective, and in most of the situations where they all fail, the good old Mark I Eyeball usually saves the day. See-and-avoid isn't perfect, either (if it was, we'd never have midairs), but it is still the most effective tool available for avoiding an impending collision.
Your objections are about things that a human can't do much about either, or if they could, a computer could do just as well or better.
(If your plane engine quits - what would a human do? Whatever it is, a computer could be programmed to do so as well. Near collisions - a constantly vigilant computer vision system watching out the window is more likely to avoid collision than a pilot with 30 seconds of reaction time.)
>If your plane engine quits - what would a human do?
This depends on a lot of factors. Some of them are things that a computer can probably be programmed to consider properly (e.g. specific cause of failure). Other factors require judgment, such as your more general situation: depending on where you are and what else is going wrong, you might choose to land the airplane at the nearest appropriate airfield, or you might choose to continue on to an airfield farther away where you can get better support while attempting to restart the failed engine on your way there, or you might decide that you have to get it on the ground right now, and that empty farmer's field over there looks good enough.
In case of more catastrophic failures like fire, computers become even more problematic because the sensors they depend on for inputs can be damaged or destroyed, leaving them with insufficient information to act properly.
>Near collisions - a constantly vigilant computer vision system watching out the window is more likely to avoid collision than a pilot with 30 seconds of reaction time.
People with a lot of money and resources have been trying to develop a fully autonomous system for avoiding impending collisions. They will almost certainly eventually succeed, but so far they haven't even come close to matching the visual scan of human pilots.
A billion dollars was spent by the EU in the 80s on self-driving cars. They didn't completely succeed. It looked like no one would succeed for 30 years. And yet, bang, when the competition's opened up, a couple of guys from Stanford do it.
I believe the technology to solve the problem is out there, it's just a matter of the right people trying at it.
Computer vision is close to being solved. Look at Kinect, Kinect 2, Google Goggles. People inside Google/Microsoft are racing at this. I'm sorry, I have to disagree with your pessimistic attitude on this.
With regard to fire - fire can kill human pilots too. With sensors, you can create a multiply redundant system - put in 20 extra sensors. With humans it's not possible.
Nothing you say addresses my broader point that there are currently too many situations in aviation where the complexity of the decisions involved exceeds our current capacity for automation.
>A billion dollars was spent by the EU in the 80s on self-driving cars. They didn't completely succeed. It looked like no one would succeed for 30 years. And yet, bang, when the competition's opened up, a couple of guys from Stanford do it.
>I believe the technology to solve the problem is out there, it's just a matter of the right people trying at it.
If they tried in the '80s and the guys from Stanford did it in the 2000s, then it was almost 30 years before anyone succeeded. I think that success had a lot more to do with technology maturing over time than it did with "the right people trying at it."
>Computer vision is close to being solved. Look at Kinect, Kinect 2, Google Goggles. People inside Google/Microsoft are racing at this.
This really depends on what you mean by "solved." Kinect is a hell of a long way from what you would need to avoid collisions in a 3D space. Kinect basically just has to deal with the outlines of objects at a relatively narrow set of distances. When your sensor is moving in three dimensions and you are trying to track an object that is also moving in three dimensions it gets a heck of a lot harder, even if you are using radar (which gives you range). If you're trying to figure out range based on the apparent size of an object of unknown actual size, it gets even harder.
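To give a feel for why passive vision is so much harder here (numbers invented): with a single camera, apparent angular size only yields range if you already know the target's true size.

    import math

    def range_from_angular_size(true_size_m, angular_size_rad):
        # Small-angle approximation: range = size / angular size.
        return true_size_m / angular_size_rad

    angular = math.radians(0.2)  # target subtends 0.2 degrees in the image

    # The identical measurement is consistent with both of these:
    print(range_from_angular_size(10.0, angular))  # light aircraft: ~2900 m
    print(range_from_angular_size(60.0, angular))  # airliner: ~17200 m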
>I'm sorry, I have to disagree with your pessimistic attitude on this.
I'm actually quite optimistic that it will happen, just not for many years yet.
>With regard to fire - fire can kill human pilots too. With sensors, you can create a multiply redundant system - put in 20 extra sensors. With humans it's not possible.
If your engine is out on a wing and it catches fire, the fire sensors will tell you so, and shortly thereafter they will most likely be destroyed and tell you nothing further. An engine on fire out on the wing is not going to burn up the pilot. The pilot can look out the window and quickly and easily assess the condition of the engine and the wing: did it burn out, or is it raging out of control, or maybe there are subtle signs that indicate something in-between? Maybe with enough fire sensors scattered all over the plane a computer could make a similar assessment, but you're talking about a lot of extra money and weight, and you still have the problem that your sensors are going to burn up shortly after going off and then you have no idea if the fire has gone away or if it has just stopped spreading. Someday maybe you can give the computer a camera to "look" at the wing to make the same kind of assessment that a human pilot can make, but that is a very long way off.
- computer vision problems (e.g. plane catches fire, how do you tell how much fire there is, etc.)
- tracking other objects in 3D while moving in 3D at high speed
The question is - can humans do this? If yes, computers can do it eventually. The only question is how long away is this. What we know from machine learning is that data is important. If you can gather enough data you can do anything. So really, your problems are a question of data collection. It is not a technically difficult problem. (By the way, one of the problems you have in aerospace is that you're control theory heavy rather than pro-AI, which means you end up not being able to solve the difficult problems.)
Also, the EU 80s project ended around early 90s, and the Grand Challenge win was only 15 years later, not 30. Had the challenge been tried 5 years earlier, it would have worked. The algorithms and hardware were already sufficient.
I hardly think it's pessimistic to say, "this problem is really hard, and it's going to take years to solve." I'm confident that they will be solved, which some of my peers might even consider a naively optimistic attitude.
>The question is - can humans do this? If yes, computers can do it eventually.
This logic is deeply, deeply flawed. I happen to believe that computers can eventually perform the tasks under discussion, but "humans can do it" is not one of the reasons why I believe that.
>So really, your problems are a question of data collection. It is not a technically difficult problem.
When it comes to tracking airborne targets with airborne radar, data collection actually is a technically difficult problem. The combination of waveform, antenna design, transmitter design, receiver design, tracker design, etc. present a set of engineering tradeoffs in effective range, range resolution, azimuth resolution, weight, size, false positive and false negative rates on radar returns, and other performance characteristics. Even the very best airborne radars provide data which is limited, especially in terms of accuracy and precision.
>By the way, one of the problems you have in aerospace is that you're control theory heavy rather than pro-AI, which means you end up not being able to solve the difficult problems.
A little more about my background: my BS is in EE, with a specialization in microcomputer interfacing (I took a lot of CS classes). In grad school, my stability and control prof had actually done some pioneering work in incorporating non-linear logic into stability and control systems (don't try this at home, kids). In addition to stability and control, my other focus for my MS was avionics. The prof who taught most of my avionics classes was actually from the CS department (his undergraduate background was EE, with a specialization in radar). One of the things that kind of surprised me about aero, having come from EE, was how broad the discipline is. Before going back to school for aero, I thought that getting a degree in aeronautical engineering would be primarily about aerodynamics, with a smattering of other stuff. Instead, I discovered that everyone gets a little bit of everything (aerodynamics, propulsion, structures, stability and control, avionics), and then specializes in one or two particular areas. By the PhD level, people who have specialized in areas other than aerodynamics have largely forgotten most of what they learned about it as undergrads. It's an incredibly heterogeneous field: propulsion and structures guys have more in common with MechEs than with other AeroEs; avionics and stability and control guys have more in common with EEs than with other AeroEs; etc. So to characterize the discipline, or any individual within it, as "control theory heavy rather than pro-ai," displays a deep misunderstanding of the character of the community. I guarantee you that there are plenty of AI experts working in the aero field.
>Also, the EU 80s project ended around early 90s, and the Grand Challenge win was only 15 years later, not 30. Had the challenge been tried 5 years earlier, it would have worked. The algorithms and hardware were already sufficient.
This just reinforces my broader point: success came not as a result of some innovative genius applying a novel new approach but rather because the technology had matured--over the course of several years--to the point where success had become not only possible but likely. Radar tracking is currently experiencing big advances for that same reason. The theory behind Space-Time Adaptive Processing (STAP) has been around for decades, but the available technology has not been up to the task of implementing it effectively. In the past we've resorted to less effective tracking methods such as MTI, but in the last decade or so the technology has finally made STAP reasonable to implement.
It seems like you are not well versed in AI. There was a debate about whether "hand engineering" vs "dumb simple algos" would get results. Dumb algos won. Your mentioning of MTI and STAP, and how difficult radar design is etc. etc., makes me think you aero guys are still in hand-engineering land.
I will offer another example. Why were the aerospace dudes not able to autonomously fly a helicopter? In 2004, Andrew Ng decided to tackle this. He completely ignored any previous work, and just using a dumb algo (reinforcement learning) and a laptop managed to get amazing autonomous performance out of it. Why was it him (an AI researcher), and not people from the field of flying?
Not especially. I audited a course as an undergrad, and hardly remember anything from it now, but I have a layman's understanding of the basics.
>There was a debate about whether "hand engineering" vs "dumb simple algos" would get results. Dumb algos won.
There are design tasks for which "algorithms" are better suited, and there are design tasks where experienced human engineers still do far, far better. The statement "Dumb algos won" is certainly true for some applications, but not all.
>Your mentioning of MTI and STAP, and how difficult radar design is etc. etc.,
Now it's my turn: you clearly are not well versed in aviation or in radar principles. Not every problem can be magically solved by throwing AI at it. Radar theory is well established, and the equations are well known. Unfortunately, they are hard equations to solve: determining the location of an object using radar involves some complicated math with a lot of variables, and the only way to solve those equations is to chew your way through them. You can simplify them, but then you have to accept increased errors from the terms you throw out.
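For the curious, the core of it is the classic radar range equation (textbook material, e.g. Skolnik; the parameter values below are invented for illustration):

    import math

    def max_detection_range_m(p_t, gain, wavelength, rcs, s_min):
        # R_max = [ P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * S_min) ]^(1/4)
        return (p_t * gain**2 * wavelength**2 * rcs
                / ((4 * math.pi) ** 3 * s_min)) ** 0.25

    # 10 kW peak power, 30 dB antenna gain, 3 cm wavelength (X-band),
    # 1 m^2 radar cross-section, 1e-15 W receiver sensitivity:
    r = max_detection_range_m(1e4, 1e3, 0.03, 1.0, 1e-15)
    print(f"{r / 1000:.0f} km")  # ~46 km

And that's the simplified, idealized form: real systems have to add loss terms, integration gain, clutter, and so on, and every term is an engineering tradeoff.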
Where "algorithms" come into play is tracking, and depending on how you define AI, radar engineers have been using AI since the invention of the first automated tracker. Even the very best automated trackers in existance today are not nearly as good as an experienced operator looking at raw returns. Someday that will probably change, but that day is still many years away.
>...makes me think you aero guys are still in hand-engineering land.
As I said before, you're making a huge mistake by lumping "you aero guys" into a single group. "Aeronautical engineering" is really "every other kind of engineering, applied to aviation." When I was in grad school, one buddy's thesis was pure AI: he developed a learning algorithm for choosing the optimum path for a jet to taxi around a crowded flight deck, using DGPS as the only position source. Another guy combined machine learning with CFD in an attempt to design better supersonic lifting surfaces (the results were not good, but his thesis was still a "success" in the sense that he expanded human knowledge and the general concept showed promise). There are some applications where AI is the way to go, and there are some applications where what you derisively call "hand engineering" is infinitely superior.
>I will offer another example. Why were the aerospace dudes not able to autonomously fly a helicopter? In 2004, Andrew Ng decided to tackle this. He completely ignored any previous work, and just using a dumb algo (reinforcement learning) and a laptop managed to get amazing autonomous performance out of it. Why was it him (an AI researcher), and not people from the field of flying?
So the "aerospace dudes" were "able to autonomously fly a helicopter" before Andrew Ng was born, and they did it using "hand engineering."
Not autonomous enough for you? Firescout flew autonomously four years before Andrew Ng flew his helicopter autonomously: http://en.wikipedia.org/wiki/Firescout
Firescout was also developed by "people from the field of flying."
There's quite a difference between Firescout and Ng's helicopter. The latter gives superhuman performance (there are videos on his homepage, Andrew Ng Stanford). Anyway, I don't know what algo Firescout uses; it might well be AI behind the scenes, proving my general point.
Your point about complicated mathematical equations lies at the root of the problem with you "unified engineering" guys. Modern AI (machine learning) is where you give up on the assumption that you (puny human) can impart "wisdom" to your system. You simply throw a random set of equations (a neural network) that are large enough/not too large (overfitting) to capture physical reality. Getting the errors low is a matter of getting enough data and experimentally adjusting the size of your nnet.
Yes, there might be grad students and profs trying AI to solve aero problems, however, if enough resources are not devoted, they will not yield good enough results. For example, spend a billion dollars (gathering data/computation) to solve your radar problem. A billion dollars in your field is pocket change.
>There's quite a difference between Firescout and Ng's helicopter. The latter gives superhuman performance (there are videos on his homepage, Andrew Ng Stanford).
You have absolutely no idea what the performance and handling characteristics of Firescout are; in fact, it is apparent that you lack the domain knowledge to understand their meaning even if they were presented to you. Nevertheless, you assert without hesitation that Andrew Ng's helicopter is superior. This is the epitome of fanboyism.
>Your point about complicated mathematical equations lies at the root of the problem with you "unified engineering" guys.
You clearly don't know what you're talking about here. Go read Skolnik's Radar Handbook, then try to tell me with a straight face that random processes are going to derive those equations for you, and somehow magically come up with a way to sidestep the basic reality that they have to be solved.
>Modern AI (machine learning) is where you give up on the assumption that you (puny human) can impart "wisdom" to your system. You simply throw a random set of equations (a neural network) that are large enough/not too large (overfitting) to capture physical reality. Getting the errors low is a matter of getting enough data and experimentally adjusting the size of your nnet.
>Yes, there might be grad students and profs trying AI to solve aero problems, however, if enough resources are not devoted, they will not yield good enough results. For example, spend a billion dollars (gathering data/computation) to solve your radar problem. A billion dollars in your field is pocket change.
In order to use a tool effectively, you have to understand both its capabilities and its limitations. Even though you have indicated that you are an expert on machine learning and I have admitted that I am not, it is now abundantly clear to me that you have absolutely no understanding of the limitations of machine learning.
If only every problem could be solved optimally by simply throwing enough data and a big enough net at it. Unfortunately, that's not how the real world works.
One problem is that machine learning algorithms often converge on local maxima that are far less optimal than what is possible. The guy who worked on the taxi routes had enormous issues with this, and only after extensive tweaking was he able to come up with solutions that were on par with human path-choosing.
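A toy demonstration of the phenomenon (with an invented objective function): simple hill climbing converges to whichever peak is nearest the starting point, not necessarily the best one.

    def f(x):
        # Two peaks: a small one near x=1, a much better one near x=4.
        return -0.5 * (x - 1) ** 2 + 2 if x < 2.5 else -0.5 * (x - 4) ** 2 + 5

    def hill_climb(x, step=0.01, iters=10000):
        for _ in range(iters):
            if f(x + step) > f(x):
                x += step
            elif f(x - step) > f(x):
                x -= step
            else:
                break  # no neighbor improves: we're at *a* maximum
        return x

    print(round(hill_climb(0.0), 2))  # ~1.0: stuck on the inferior peak
    print(round(hill_climb(3.0), 2))  # ~4.0: finds the better peak

Real optimizers are cleverer than this, but the underlying failure mode is the same, and "extensive tweaking" is often what it takes to escape it.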
An even bigger problem stems from the fact that a machine learning algorithm is only as good as the model it works in. I mentioned that the guy working on lifting surfaces in CFD did not get great results. His problem was that his algorithms pretty much always found the places where the CFD models diverged from reality: they would find the optimum shapes for the model they were working within, but those shapes always performed terribly in the wind tunnel because the algorithm was finding optima at points where the model diverged significantly from reality. You can't solve this problem with "better models," because every model diverges from reality. If a model doesn't diverge from reality, it's no longer a model, it's reality. Where he really impressed his review board was when he detailed a follow-on experiment of using this phenomenon to develop better CFD models, within which human engineers would be able to come up with better designs.
Finally, a billion dollars is not "pocket change" in any field. Even if someone had a spare billion dollars lying around to fund R&D for radar tracking, the opportunity costs of blowing it on a machine learning experiment, instead of using it to fund experienced engineers working from proven principles, would be unacceptably high.
It is quite clear that you are a person who likes to revel in appeal to authority arguments and casually throw off insults. Throwing a textbook, or your PhD buddy's anecdotes, in my face does not negate what I say.
Of course, Dr. Andrew Ng, head of the Stanford AI lab, is pussying around with his autonomous helicopter, after all the problems were apparently solved by your defence contractor buddies in the 60s. The fact that helicopter pilots still exist is because society is too rich and we need to lighten our wallets. There, I mirrored your appeal to authority argument.
Mirroring your insults: it's clear you don't know what you're talking about. The fact that you've somehow been granted a doctorate further confirms my suspicions about the quality of education these days. The simplicity/non-pioneering-ness of your phd buddies' theses is further confirmation of that. And the fact that you have been assigned to evaluate important technologies in your sector says a lot about the general competence level in it.
Of course searching can result in local minima. -That Exactly- is why you have to keep running computers and getting more data. You can keep chanting to yourself - i am clever, i am clever, i write equations - and tell everyone the problem is difficult, years out from solution - or you can switch on the damn computers and let them find your answer.
If a billion dollars is too much for a system that could finally give you autonomous planes, and you instead hope your big brains will somehow solve it despite not having done so for a few decades, then you, or your industry, does not have a clear grasp of the meaning of the term opportunity cost.
>It is quite clear that you are a person who likes to revel in appeal-to-authority arguments and casually throw off insults. Throwing a textbook, or your phd buddies' anecdotes, in my face does not negate what i say.
I did not cite the textbook as an appeal to authority: I cited it because you repeatedly demonstrated that you don't understand what I'm saying, and kept making ludicrous arguments as a result. Reading that book (or a similar one) is the only way for you to gain the domain knowledge necessary to say something meaningful on this subject.
Similarly, I raised my peers' thesis work not as an "appeal to authority," but as concrete examples of the limitations of machine learning as an engineering tool.
You, on the other hand, repeatedly used Andrew Ng's work as an appeal to authority. The worst part is that your primary example was factually incorrect: you initially stated that he was some kind of wunderkind who easily solved a problem that had supposedly been impossible for regular aero engineers; when I pointed out that regular aero engineers had, in fact, solved the problem two decades before his birth, you responded with the absurd claim that his work was somehow superior, despite a complete lack of evidence to support that position.
Moreover, saying, "you do not know what you are talking about on this subject" is not an insult. I tried saying it more subtly at first, with attempts to fill some of the gaps in your domain knowledge, and yet you persisted in making arguments based on terribly insufficient knowledge of the subject under discussion, so I came out and said it explicitly. When I did so, I even provided a text you could read in order to correct your ignorance, but you chose to reject that as "appeal to authority."
>The fact that you've somehow been granted a doctorate further confirms my suspicions about the quality of education these days.
I never claimed to have a PhD. I have clearly stated that I have an MS in Aero.
>The simplicity/non-pioneering-ness of your phd buddies' theses is further confirmation of that.
Just as you claimed that Andrew Ng's helicopter was somehow superior to other autonomous helicopters, even though you know nothing about those other helicopters, you now claim that the graduate theses of two complete strangers are "simple" and "non-pioneering" based on a few sentences I wrote. Throughout this conversation, you have displayed this habit of reaching unreasonable conclusions from insufficient evidence. Your arguments would be much more plausible if you got rid of this habit.
>Of course searching can result in local minima. -That Exactly- is why you have to keep running computers and getting more data. You can keep chanting to yourself - i am clever, i am clever, i write equations - and tell everyone the problem is difficult, years out from solution - or you can switch on the damn computers and let them find your answer.
This sums up the fundamental problem with your views. I have stated repeatedly that machine learning has its uses, but that it also has its limits, and that many aspects of engineering and design are still best conducted by human beings. I have given several examples to demonstrate this. You have this inexplicable faith that any problem can be solved just by throwing enough data and computers at it. It would be wonderful if only all engineering problems were that easy to solve. Unfortunately, it's just not true. If it were true, people would be disrupting the industry en masse by having computers design superior products faster and cheaper than human engineers can. You even add a touch of "No True Scotsman" to your reasoning: if you don't get magical results from your machine learning, it must be because you're doing it wrong: not enough data, or not enough computers, or you didn't spend enough time tweaking it; just throw more time and money at it, and then you'll get the answer.
Ok, i apologize if i have misjudged your intentions and you weren't trying to insult me. I went a little overboard there; i'm sure you and your friends are competent people deserving of your positions.
Yes, you are correct, I have come to a viewpoint that all problems can be solved by throwing enough computers and data at them. This was informed by arguments in ai. See, for example, jurgen schmidhuber's website, or genetic programming at john koza's website.
I agree, many problems are in a sense "easy", and humans can solve them. However, my belief is that those are the problems that have already been solved. The difficult problems, the ones that have not yet been solved, might well be too difficult for humans to comprehend. Computation is cheap enough now that it should be the default first step to try to brute force a solution. Even in high school, teach students how to describe problems as an optimization. Don't bother teaching them equation solving. Analytical solutions are sometimes needed; let that be a specialization for advanced undergraduates or even graduate school.
This is kind of like the reductionism vs non-reductionism argument: in physics, simple laws were discovered; in biology, that will not be possible.
>Ok, i apologize if i have misjudged your intentions and you weren't trying to insult me. I went a little overboard there; i'm sure you and your friends are competent people deserving of your positions.
And I apologize as well; my statements clearly could have been made in a more conciliatory tone.
>Yes, you are correct, I have come to a viewpoint that all problems can be solved by throwing enough computers and data at them.
This may eventually become true, but we still have a lot of progress to make before we get there. I suspect that when it does become true, computers will look a lot more like animal brains than like the computers of today, or maybe like something completely different from either.
>This was informed by arguments in ai. See, for example, jurgen schmidhuber's website, or genetic programming at john koza's website.
After a very cursory look, it appears that Dr. Koza has a very pragmatic attitude about genetic algorithms and is well aware of their limitations, and thus chooses to focus his efforts on areas where they are most applicable. On the other hand, Dr. Schmidhuber seems to have staked his legacy on the idea that computers will soon be able to solve absolutely any problem better than humans can, and is passionately trying to spread this vision. He may very well be proven correct in the end, but I tend to be deeply skeptical of predictions made by such visionaries.
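For anyone following along who hasn't seen one: here's a toy genetic algorithm, the classic "OneMax" classroom exercise (my sketch, not anything from Dr. Koza's systems), evolving a bitstring toward all ones:

    import random
    random.seed(1)

    N, POP, GENS = 40, 30, 60
    fitness = sum          # fitness of a bitstring = number of 1-bits
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]

    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                 # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)         # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N)              # single point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children

    print(fitness(max(pop, key=fitness)), "of", N)

It works beautifully here because the fitness landscape is trivial; the interesting question in this whole thread is what happens when it isn't.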
>I agree, many problems are in a sense "easy", and humans can solve them.
Some problems that are easy for humans are hard for computers, and some problems that are easy for computers are hard for humans. I assume that's why you put "easy" in quotes: problems that are "easy" for humans.
>However, my belief is that those are the problems that have already been solved. The difficult problems, the ones that have not yet been solved, might well be too difficult for humans to comprehend.
We may very well eventually reach a point where we truly have solved all of the problems within our capacity as humans, but that day is so far off that everyone alive today will be ancient history by then. Often, solving one problem reveals several more interesting problems that we hadn't even considered before.
>Computation is cheap enough now that it should be the default first step to try to brute force a solution.
This may be true for some classes of problems, but it is still not true for many, and it will never be true for some. Consider cryptography: "brute force" only works if you have considerably more computational power than the computer(s) used to perform the encryption in the first place. However, if you can find a flaw in the encryption scheme to exploit, you might even be able to get by with less computational power.
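A toy illustration of that asymmetry (a single-byte XOR "cipher" with an invented key, nothing like real crypto): brute force pays for a keyspace scan, while exploiting the scheme's flaw recovers the key with no search at all.

    def xor(data, key):
        return bytes(b ^ key for b in data)

    secret_key = 0x5A
    ciphertext = xor(b"attack at dawn", secret_key)

    # Brute force: scan all 256 keys until the known plaintext appears.
    brute = next(k for k in range(256)
                 if xor(ciphertext, k) == b"attack at dawn")

    # Exploit the flaw: one known plaintext byte reveals the key outright.
    clever = ciphertext[0] ^ ord("a")

    print(hex(brute), hex(clever))    # both print 0x5a

Here the keyspace is 256, so the scan is free; a real cipher's keyspace is 2^128 or more, which is exactly why the "just add computers" strategy fails while the find-the-flaw strategy can still work.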
>Even in high school, teach students how to describe problems as an optimization. Don't bother teaching them equation solving. Analytical solutions are sometimes needed; let that be a specialization for advanced undergraduates or even graduate school.
This would be a very, very bad idea. In order to use a tool like machine learning properly, you need a solid understanding of the problems you are trying to solve, so that you can frame them properly for the machine. Furthermore, there are many pragmatic real-world problems that require analytical solutions, far more than a small body of specialists could address. I think that basic programming concepts like iteration and recursion should be taught in secondary school (possibly even primary school), and I could definitely see adding basic concepts of optimization to that curriculum, but analytical thinking is so critical that taking it out would be an enormous mistake.
When i say analytical, i mean in the sense that mathematics can be used. By optimization, i mean a set of parameters that are tuned by minimizing a function with computers.
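To make the distinction concrete, here's the same toy problem done both ways (my example, invented data): fit y = m*x + b to five points, once by equation solving, once by just switching on the computer and minimizing the squared error with a crude search.

    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.1, 2.9, 5.2, 6.8, 9.1]

    def sse(m, b):     # sum of squared errors for the line y = m*x + b
        return sum((m * x + b - y) ** 2 for x, y in zip(xs, ys))

    # Analytical: the closed-form least-squares solution.
    n, sx, sy = len(xs), sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m_a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b_a = (sy - m_a * sx) / n

    # Optimization: naive coordinate descent over (m, b).
    m_o, b_o, step = 0.0, 0.0, 1.0
    while step > 1e-6:
        for dm, db in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if sse(m_o + dm, b_o + db) < sse(m_o, b_o):
                m_o, b_o = m_o + dm, b_o + db
                break
        else:
            step /= 2        # no improving move: refine the step size

    print((m_a, b_a), (m_o, b_o))    # both land on (nearly) the same line

Both approaches agree here; the disagreement is about which one generalizes when no closed form exists.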
On computational power, we'll have to agree to disagree. I believe self-driving cars and ng's helicopter are examples of why i think computing power is sufficient, and there are many others (how kinect, google goggles, ibm watson, etc. were built; there are also the "humie" awards in genetic programming, which compete with traditional engineering). In engineering, finite element analysis has taken over. It is more human-intensive than straight-up machine learning, but it's an example nevertheless of compute power displacing humans. I expect finite element analysis to be overtaken by machine learning too in many of its applications, as awareness/trust in machine learning gains mindshare.
Really, my basic point is: forget what you know, start afresh, and put faith in brute force searches. In natural language processing this happened 20 years ago and they made great progress; in computer vision it is happening right now. These two fields are the toughest, in my humble opinion, and surely other fields can learn from what they do.
The author offers some system-level solutions that I agree with, but which I doubt will be implemented any time soon on a large scale. That leaves me asking other questions:
What can I do, as a parent, if I see that my children are not being taught math at the proper pace? I could tutor my children at home to a certain extent, but that just raises all sorts of other questions:
I've got a solid math background, but no education background, so what kinds of resources are available to me to establish an effective home tutoring program?
How can I tell if the pace I am setting is too fast, too slow, or just right?
In the unlikely event that one of my children is a "math outlier," my own knowledge of math, although in the 90th (95th? 99th?) percentile, would prove woefully inadequate: where would I find an (affordable) tutor with comprehensive knowledge of mathematics?
This last question is the only one I think I have a decent answer for: find a mathematics graduate student looking to earn some money on the side.
I tutor my 4-year-old daughter in math using the Singapore Math texts (http://www.singaporemath.com/). We are working through the kindergarten books, and I think my daughter is doing OK. I think that everybody who values math education should do something like this with their children. The gain is just too big to ignore. My daughter is already thinking about addition, and while she doesn't yet remember the addition facts, she has no difficulty posing word problems as addition problems.
The books are not very difficult for parents to understand and give you a baseline that you can follow very closely. Also, since it is unlikely that my child will do the same book in school, I hope I do not run the risk that she will refuse to do math in school because she has already done the book.
Since you have the book, you can set the pace based on how difficult the lesson of the day seems for the child. You can do one page per week or 10 pages per day (both have happened with us). Of course, there can be several levels of understanding of the same lesson, and in my case I am usually happy with the lowest level. To correct for that, I sequentially do two different books that cover the same material (Singapore Math provides multiple books for the same level). I skipped some chapters about weights and volumes, since these seemed too involved for my daughter (3 at the time), but I have done everything else that is in these books.
I must say that until now this has been a wonderful experience for me. I have never needed to ask my daughter to do math; any time she sees me free, she asks for it herself. And almost always I am the one who tries to cut the lesson short, making sure that the next day she will come back wanting more.
I have a Kindle, and I very much like the convenience of being able to manage my library by easily moving books on and off of the Kindle wherever I am. However, any time I buy a book from Amazon, I first check whether a DRM-free alternative is available elsewhere. I don't do this out of principled opposition to DRM (I am opposed to DRM on principle, but that's not why I do this); I do it because I want my books to be as portable as possible, not locked in to any one merchant, platform, or device. Unfortunately, in most cases the only alternative is a file with some other form of DRM that won't work on my Kindle. If the big publishers stopped using DRM, I would stop buying ebooks from Amazon and buy all of them directly from the publishers.
If that were to happen, I'm sure other people would offer cloud services that let you upload your ebooks from wherever you buy them and then access them from your device over your choice of wireless carrier. And if it lost enough market share, I'm sure that even Amazon would start offering such a service.