I hardly think it's pessimistic to say, "this problem is really hard, and it's going to take years to solve." I'm confident that it will be solved, which some of my peers might even consider a naively optimistic attitude.
>The question is - can humans do this? If yes, computers can do it eventually.
This logic is deeply, deeply flawed. I happen to believe that computers can eventually perform the tasks under discussion, but "humans can do it" is not one of the reasons why I believe that.
>So really, your problems are a question of data collection. It is not a technically difficult problem.
When it comes to tracking airborne targets with airborne radar, data collection actually is a technically difficult problem. The combination of waveform, antenna design, transmitter design, receiver design, tracker design, etc. presents a set of engineering tradeoffs in effective range, range resolution, azimuth resolution, weight, size, false positive and false negative rates on radar returns, and other performance characteristics. Even the very best airborne radars provide data which is limited, especially in terms of accuracy and precision.
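To give a flavor of why those tradeoffs are so unforgiving, here is a toy sketch built on the textbook monostatic radar range equation; the numbers are invented for illustration and don't describe any real system:

```python
import math

def max_detection_range(p_t, g, wavelength, rcs, p_min):
    """Classic monostatic radar range equation, solved for maximum range.

    p_t        -- peak transmit power (W)
    g          -- antenna gain (dimensionless)
    wavelength -- radar wavelength (m)
    rcs        -- target radar cross section (m^2)
    p_min      -- minimum detectable received power (W)
    """
    return ((p_t * g ** 2 * wavelength ** 2 * rcs)
            / ((4 * math.pi) ** 3 * p_min)) ** 0.25

# Toy numbers, not any real system: note the fourth-root dependence,
# so doubling transmit power buys only ~19% more detection range.
r1 = max_detection_range(1e4, 1e3, 0.03, 1.0, 1e-13)
r2 = max_detection_range(2e4, 1e3, 0.03, 1.0, 1e-13)
print(f"{r1 / 1000:.1f} km -> {r2 / 1000:.1f} km")
```

That fourth-root relationship is why you can't simply buy your way to more range with a bigger transmitter: weight, size, and power all climb much faster than performance does.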
>By the way, one of the problems you have in aerospace is that you're control theory heavy rather than pro-AI, which means you end up not being able to solve the difficult problems.
A little more about my background: my BS is in EE, with a specialization in microcomputer interfacing (I took a lot of CS classes). In grad school, my stability and control prof had actually done some pioneering work in incorporating non-linear logic into stability and control systems (don't try this at home, kids). In addition to stability and control, my other focus for my MS was avionics. The prof who taught most of my avionics classes was actually from the CS department (his undergraduate background was EE, with a specialization in radar).

One of the things that kind of surprised me about aero, having come from EE, was how broad the discipline is. Before going back to school for aero, I thought that getting a degree in aeronautical engineering would be primarily about aerodynamics, with a smattering of other stuff. Instead, I discovered that everyone gets a little bit of everything (aerodynamics, propulsion, structures, stability and control, avionics), and then specializes in one or two particular areas. By the PhD level, people who have specialized in areas other than aerodynamics have largely forgotten most of what they learned about it as undergrads. It's an incredibly heterogeneous field: propulsion and structures guys have more in common with MechEs than with other AeroEs; avionics and stability and control guys have more in common with EEs than with other AeroEs; etc.

So to characterize the discipline, or any individual within it, as "control theory heavy rather than pro-AI," displays a deep misunderstanding of the character of the community. I guarantee you that there are plenty of AI experts working in the aero field.
>Also, the EU 80s project ended around the early 90s, and the grand challenge win was only 15 years later, not 30. Had the challenge been tried 5 years earlier, it would have worked. The algorithms and hardware were already sufficient.
This just reinforces my broader point: success came not as a result of some innovative genius applying a novel approach, but rather because the technology had matured--over the course of several years--to the point where success had become not only possible but likely. Radar tracking is currently experiencing big advances for the same reason. The theory behind Space-Time Adaptive Processing (STAP) has been around for decades, but the available technology has not been up to the task of implementing it effectively. In the past we've resorted to less effective tracking methods such as MTI, but in the last decade or so the technology has finally made STAP reasonable to implement.
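For readers who haven't met MTI, the core idea is simple enough to sketch in a few lines; this is a deliberately minimal two-pulse canceller on made-up data, nothing like a fielded implementation:

```python
import numpy as np

# Made-up complex returns from one range bin over 8 pulses: strong
# stationary clutter plus a weak mover whose phase advances from
# pulse to pulse (its Doppler shift).
n_pulses = 8
clutter = 100.0 * np.ones(n_pulses, dtype=complex)
mover = 1.0 * np.exp(1j * 0.6 * np.arange(n_pulses))
returns = clutter + mover

# Two-pulse MTI canceller: subtract each pulse from the next.
# The stationary clutter cancels exactly; the mover leaves a residue.
mti_output = returns[1:] - returns[:-1]
print(np.abs(mti_output).round(3))  # ~0.591 everywhere: the mover survives
```

STAP generalizes this idea by adapting jointly across pulses and antenna elements, which is exactly why it is so much more computationally demanding than a simple canceller.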
It seems like you are not well versed in AI. There was a debate about whether "hand engineering" vs "dumb simple algos" would get results. Dumb algos won. Your mentioning of MTI and STAP, and how difficult radar design is, etc. etc., makes me think you aero guys are still in hand-engineering land.
I will offer another example. Why were the aerospace dudes not able to autonomously fly a helicopter? In 2004, Andrew Ng decided to tackle this. He completely ignored any previous work, and just using a dumb algo (reinforcement learning) and a laptop, managed to get amazing autonomous performance out of it. Why was it him (an AI researcher), and not people from the field of flying?
Not especially. I audited a course as an undergrad, and hardly remember anything from it now, but I have a layman's understanding of the basics.
>There was a debate about whether "hand engineering" vs "dumb simple algos" would get results. Dumb algos won.
There are design tasks for which "algorithms" are better suited, and there are design tasks where experienced human engineers still do far, far better. The statement "Dumb algos won" is certainly true for some applications, but not all.
>Your mentioning of MTI and STAP, and how difficult radar design is, etc. etc.,
Now it's my turn: you clearly are not well versed in aviation or in radar principles. Not every problem can be magically solved by throwing AI at it. Radar theory is well established, and the equations are well known. Unfortunately, they are hard equations to solve: determining the location of an object using radar involves some complicated math with a lot of variables, and the only way to solve those equations is to chew your way through them. You can simplify them, but then you have to accept increased errors from the terms you throw out.
Where "algorithms" come into play is tracking, and depending on how you define AI, radar engineers have been using AI since the invention of the first automated tracker. Even the very best automated trackers in existance today are not nearly as good as an experienced operator looking at raw returns. Someday that will probably change, but that day is still many years away.
>...makes me think you aero guys are still in hand-engineering land.
As I said before, you're making a huge mistake by lumping "you aero guys" into a single group. "Aeronautical engineering" is really "every other kind of engineering, applied to aviation." When I was in grad school, one of my buddies wrote a thesis that was pure AI: he developed a learning algorithm for choosing the optimum path for a jet to taxi around a crowded flight deck, using DGPS as the only position source. Another guy combined machine learning with CFD in an attempt to design better supersonic lifting surfaces (the results were not good, but his thesis was still a "success" in the sense that he expanded human knowledge and the general concept showed promise). There are some applications where AI is the way to go, and there are some applications where what you derisively call "hand engineering" is infinitely superior.
>I will offer another example. Why were the aerospace dudes not able to autonomously fly a helicopter? In 2004, Andrew Ng decided to tackle this. He completely ignored any previous work, and just using a dumb algo (reinforcement learning) and a laptop, managed to get amazing autonomous performance out of it. Why was it him (an AI researcher), and not people from the field of flying?
So the "aerospace dudes" were "able to autonomously fly a helicopter" before Andrew Ng was born, and they did it using "hand engineering."
Not autonomous enough for you? Firescout flew autonomously four years before Andrew Ng flew his helicopter autonomously: http://en.wikipedia.org/wiki/Firescout
Firescout was also developed by "people from the field of flying."
There's quite a difference between Firescout and Ng's helicopter. The latter gives superhuman performance (there are videos on his homepage; search "Andrew Ng Stanford"). Anyway, I don't know what algo Firescout uses; it might well be AI behind the scenes, proving my general point.
Your point about complicated mathematical equations lies at the root of the problem with you "unified engineering" guys. Modern AI (machine learning) is where you give up on the assumption that you (puny human) can impart "wisdom" to your system. You simply throw a random set of equations (a neural network) that are large enough/not too large (overfitting) to capture physical reality. Getting the errors low is a matter of getting enough data and experimentally adjusting the size of your nnet.
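To make "experimentally adjusting the size of your nnet" concrete, here is a minimal sketch of that tuning loop, using scikit-learn's MLPRegressor on made-up data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Made-up 1-D "physical reality" observed through noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(500)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# Sweep the hidden-layer width: too small underfits, too large risks
# overfitting, so keep whatever scores best on the held-out data.
for width in (2, 8, 32, 128):
    net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    print(width, round(net.score(X_va, y_va), 3))
```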
Yes, there might be grad students and profs trying AI to solve aero problems; however, if enough resources are not devoted, they will not yield good enough results. For example, spend a billion dollars (gathering data/computation) to solve your radar problem. A billion dollars in your field is pocket change.
>There's quite a difference between Firescout and Ng's helicopter. The latter gives superhuman performance (there are videos on his homepage; search "Andrew Ng Stanford").
You have absolutely no idea what the performance and handling characteristics of Firescout are; in fact, it is apparent that you lack the domain knowledge to understand their meaning even if they were presented to you. Nevertheless, you assert without hesitation that Andrew Ng's helicopter is superior. This is the epitome of fanboyism.
>Your point about complicated mathematical equations lies at the root of the problem with you "unified engineering" guys.
You clearly don't know what you're talking about here. Go read Skolnik's Radar Handbook, then try to tell me with a straight face that random processes are going to derive those equations for you, and somehow magically come up with a way to sidestep the basic reality that they have to be solved.
>Modern AI (machine learning) is where you give up on the assumption that you (puny human) can impart "wisdom" to your system. You simply throw a random set of equations (a neural network) that are large enough/not too large (overfitting) to capture physical reality. Getting the errors low is a matter of getting enough data and experimentally adjusting the size of your nnet.
>Yes, there might be grad students and profs trying AI to solve aero problems; however, if enough resources are not devoted, they will not yield good enough results. For example, spend a billion dollars (gathering data/computation) to solve your radar problem. A billion dollars in your field is pocket change.
In order to use a tool effectively, you have to understand both its capabilities and its limitations. Even though you have indicated that you are an expert on machine learning and I have admitted that I am not, it is now abundantly clear to me that you have absolutely no understanding of the limitations of machine learning.
If only every problem could be solved optimally by simply throwing enough data and a big enough net at it. Unfortunately, that's not how the real world works.
One problem is that machine learning algorithms often converge on local maxima that are far from the best achievable solution. The guy who worked on the taxi routes had enormous issues with this, and only after extensive tweaking was he able to come up with solutions that were on par with human path-choosing.
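A toy illustration of that failure mode, and of the standard random-restart mitigation, on a made-up bumpy objective:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up objective: many local minima around one global minimum.
def bumpy(x):
    return np.sin(5 * x[0]) + 0.1 * (x[0] - 2.0) ** 2

# A single local search just rolls into the nearest valley...
single = minimize(bumpy, x0=[-3.0])

# ...so in practice you restart from many random points and keep the best.
rng = np.random.default_rng(0)
best = min((minimize(bumpy, x0=[x0]) for x0 in rng.uniform(-5, 5, 20)),
           key=lambda r: r.fun)
print(round(float(single.fun), 3), round(float(best.fun), 3))
```

Restarts help on a one-dimensional toy like this; on a real, high-dimensional design space, "just restart more" gets expensive very quickly, which was exactly his problem.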
An even bigger problem stems from the fact that a machine learning algorithm is only as good as the model it works in. I mentioned that the guy working on lifting surfaces in CFD did not get great results. His problem was that his algorithms pretty much always found the places where the CFD models diverged from reality: they would find the optimum shapes for the model they were working within, but those shapes performed terribly in the wind tunnel, because the algorithm was finding optima precisely at the points where the model diverged significantly from reality. You can't solve this problem with "better models," because every model diverges from reality somewhere; if a model doesn't diverge from reality, it's no longer a model, it's reality. Where he really impressed his review board was when he detailed a follow-on experiment: using this phenomenon to develop better CFD models, within which human engineers would be able to come up with better designs.
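That failure mode is easy to reproduce in miniature. In the sketch below (made-up functions, purely illustrative), an optimizer searching a cheap surrogate model gravitates to exactly where the surrogate diverges from the truth:

```python
import numpy as np

def reality(x):
    return -(x - 1.0) ** 2              # true optimum is at x = 1

def model(x):
    # A surrogate that tracks reality near the origin but diverges
    # far away, like a CFD code leaving its region of validity.
    return -(x - 1.0) ** 2 + 0.05 * x ** 4

# Exhaustively optimize the *model* over a wide design range.
xs = np.linspace(-10, 10, 10001)
x_best = xs[np.argmax(model(xs))]
print(f"model's favorite design: x = {x_best:.2f}")       # x = 10.00
print(f"its real-world score: {reality(x_best):.1f}")     # -81.0
print(f"score of the true optimum: {reality(1.0):.1f}")   # 0.0
```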
Finally, a billion dollars is not "pocket change" in any field. Even if someone had a spare billion dollars lying around to fund R&D for radar tracking, the opportunity costs of blowing it on a machine learning experiment, instead of using it to fund experienced engineers working from proven principles, would be unacceptably high.
It is quite clear that you are a person who likes to revel in appeal-to-authority arguments and casually throw off insults. Throwing a textbook, or your PhD buddies' anecdotes, in my face does not negate what I say.
Of course, Dr. Andrew Ng, head of the Stanford AI lab, is pussying around with his autonomous helicopter, after all problems were apparently solved by your defence contractor buddies in the 60s. The fact that helicopter pilots still exist is because society is too rich and we need to lighten our wallets. There, I mirrored your appeal-to-authority argument.
Mirroring your insults: it's clear you don't know what you're talking about. The fact that you've somehow been granted a doctorate further confirms my suspicions about the quality of education these days. The simplicity/non-pioneering-ness of your PhD buddies' theses is further confirmation of that. And the fact that you have been assigned to evaluate important technologies in your sector says a lot about the general competence level in it.
Of course searching can result in local minima. *That exactly* is why you have to keep running computers and getting more data. You can keep chanting to yourself - I am clever, I am clever, I write equations - and tell everyone the problem is difficult, years out from solution - or you can switch on the damn computers and let them find your answer.
If a billion dollars is too much for a system that could finally give you autonomous planes, and you instead hope that somehow your big brains will solve it, despite not having done so for a few decades, then you, or your industry, do not have a clear grasp of the meaning of the term opportunity cost.
>It is quite clear that you are a person who likes to revel in appeal-to-authority arguments and casually throw off insults. Throwing a textbook, or your PhD buddies' anecdotes, in my face does not negate what I say.
I did not cite the textbook as an appeal to authority. I cited it because you repeatedly demonstrated that you don't understand what I'm saying and kept making ludicrous arguments as a result, and because reading that book (or a similar one) is the only way for you to gain the domain knowledge necessary to say something meaningful on this subject.
Similarly, I raised my peers' thesis work not as an "appeal to authority," but as a concrete example of the limitations of machine learning as an engineering tool.
You, on the other hand, repeatedly used Andrew Ng's work as an appeal to authority. The worst part is that your primary example was factually incorrect: you initially stated that he was some kind of wunderkind who was able to easily solve a problem that had supposedly been impossible for regular aero engineers to solve; when I pointed out that regular aero engineers had, in fact, solved the problem two decades before his birth, you responded with the absurd claim that his work was somehow superior, despite a complete lack of evidence to support that position.
Moreover, saying, "you do not know what you are talking about on this subject" is not an insult. I tried saying it more subtly at first, with attempts to fill some of the gaps in your domain knowledge, and yet you persisted in making arguments based on terribly insufficient knowledge of the subject under discussion, so I came out and said it explicitly. When I did so, I even provided a text you could read in order to correct your ignorance, but you chose to reject that as "appeal to authority."
>The fact that you've somehow been granted a doctorate further confirms my suspicions about the quality of education these days.
I never claimed to have a PhD. I have clearly stated that I have an MS in Aero.
>The simplicity/non-pioneering-ness of your PhD buddies' theses is further confirmation of that.
Just as you claimed that Andrew Ng's helicopter was somehow superior to other autonomous helicopters, even though you know nothing about those other helicopters, you now claim that the graduate theses of two complete strangers are "simple" and "non-pioneering" based on a few sentences I wrote. Throughout this conversation, you have displayed this habit of reaching unreasonable conclusions based on insufficient evidence. Your arguments would be much more plausible if you would get rid of this habit.
>Of course searching can result in local minima. *That exactly* is why you have to keep running computers and getting more data. You can keep chanting to yourself - I am clever, I am clever, I write equations - and tell everyone the problem is difficult, years out from solution - or you can switch on the damn computers and let them find your answer.
This sums up the fundamental problem with your views. I have stated repeatedly that machine learning has its uses, but that it also has its limits, and that many aspects of engineering and design are still best conducted by human beings. I have given several examples to demonstrate this. You have this inexplicable faith that any problem can be solved just by throwing enough data and computers at it. It would be wonderful if only all engineering problems were that easy to solve. Unfortunately, it's just not true. If it were true, people would be disrupting the industry en masse by having computers design superior products faster and cheaper than human engineers can. You even add a touch of "No True Scotsman" to your reasoning: if you don't get magical results from your machine learning, it must be because you're doing it wrong: not enough data, or not enough computers, or you didn't spend enough time tweaking it; just throw more time and money at it, and then you'll get the answer.
OK, I apologize if I have misjudged your intentions and you weren't trying to insult me. I went a little overboard there; I'm sure you and your friends are competent people deserving of your statuses.
Yes, you are correct: I have come to the viewpoint that all problems can be solved by throwing enough computers and data at them. This was informed by arguments in AI. See, for example, Jurgen Schmidhuber's website, or genetic programming at John Koza's website.
I agree, many problems are in a sense "easy", and humans can solve them. However, my belief is that those are the problems that have already been solved. The difficult problems, the ones that have not yet been solved, might well be too difficult for humans to comprehend. Computation is cheap enough now that it should be the default first step to try to brute force a solution. Even in high school, teach students how to describe problems as an optimization. Don't bother teaching them equation solving. Analytical solutions are sometimes needed; let that be a specialization for advanced undergraduates or even graduate school.
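As a trivial sketch of what "describe problems as an optimization" means here: instead of solving x^2 = 2 analytically, let the computer minimize how badly the equation is violated (made-up example):

```python
from scipy.optimize import minimize_scalar

# "Equation solving" recast as optimization: rather than deriving
# x = sqrt(2), minimize the squared residual of x**2 = 2.
result = minimize_scalar(lambda x: (x ** 2 - 2.0) ** 2,
                         bounds=(0.0, 5.0), method="bounded")
print(result.x)  # ~1.41421, no algebra required
```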
This is kind of like the reductionism vs non-reductionism argument: in physics, simple laws were discovered; in biology, this will not be possible.
>OK, I apologize if I have misjudged your intentions and you weren't trying to insult me. I went a little overboard there; I'm sure you and your friends are competent people deserving of your statuses.
And I apologize as well, as my statements clearly could have been made in a more conciliatory tone.
>Yes, you are correct: I have come to the viewpoint that all problems can be solved by throwing enough computers and data at them.
This may eventually become true, but we still have a lot of progress before we get there. I suspect that when it does become true, computers will look a lot more like animal brains than the computers of today, or maybe they will look like something completely different from either.
>This was informed by arguments in AI. See, for example, Jurgen Schmidhuber's website, or genetic programming at John Koza's website.
After a very cursory look, it appears that Dr. Koza has a very pragmatic attitude about genetic algorithms and is well aware of their limitations, and thus chooses to focus his efforts on areas where they are most applicable. On the other hand, Dr. Schmidhuber seems to have staked his legacy on the idea that computers will soon be able to solve absolutely any problem better than humans can, and is passionately trying to spread this vision. He may very well be proven correct in the end, but I tend to be deeply skeptical of predictions made by such visionaries.
>I agree, many problems are in a sense "easy", and humans can solve them.
Some problems that are easy for humans are hard for computers, and some problems that are easy for computers are hard for humans. I assume that's why you put "easy" in quotes: problems that are "easy" for humans.
>However, my belief is that those are the problems that have already been solved. The difficult problems, the ones that have not yet been solved, might well be too difficult for humans to comprehend.
We may very well eventually reach a point where we truly have solved all of the problems within our capacity as humans, but that day is so far off that everyone alive today will be ancient history by then. Often, solving one problem reveals several more interesting problems that we hadn't even considered before.
>Computation is cheap enough now that it should be the default first step to try to brute force a solution.
This may be true for some classes of problems, but it is still not true for many, and will never be true for some. Consider cryptography: "brute force" only works if you have considerably more computational power than the computer(s) used to perform the encryption in the first place. However, if you can find a flaw in the encryption scheme to exploit, you might even be able to get by with less computational power.
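The arithmetic behind that point is worth seeing; a back-of-the-envelope sketch for a 128-bit key, with deliberately generous made-up attacker numbers:

```python
# Back-of-the-envelope: exhausting a 128-bit keyspace by brute force.
keyspace = 2 ** 128            # possible keys
rate = 1e12 * 1e6              # a million machines at a trillion guesses/sec
seconds_per_year = 3.156e7

years = keyspace / rate / seconds_per_year
print(f"{years:.1e} years")    # ~1.1e13 years, ~800x the age of the universe
```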
>Even in high school, teach students how to describe problems as an optimization. Don't bother teaching them equation solving. Analytical solutions are sometimes needed; let that be a specialization for advanced undergraduates or even graduate school.
This would be a very, very bad idea. In order to use a tool like machine learning properly, you need a solid understanding of the problems you are trying to solve, so that you can frame them properly for the machine. Furthermore, there are many pragmatic real-world problems that require analytical solutions, far more than could be addressed by a small body of specialists. I think that basic programming concepts like iteration and recursion should be taught in secondary school (possibly even primary school), and I could definitely see adding basic concepts of optimization to that curriculum, but analytical thinking is so critical that taking it out would be an enormous mistake.
When I say analytical, I mean in the sense that mathematics can be used. By optimization, I mean a set of parameters that are tuned by minimizing a function with computers.
On computational power, we'll have to agree to disagree. I believe self-driving cars and Ng's helicopter are examples of why I think computing power is sufficient, and there are many others (how Kinect, Google Goggles, IBM Watson, etc. were built; there are also the "Humie" awards in genetic programming, which compete with traditional engineering). In engineering, finite element analysis has taken over. It is more human-intensive than straight-up machine learning, but it's an example nevertheless of compute power displacing humans. I expect finite element analysis to be overtaken by machine learning too in many of its applications, as awareness/trust in machine learning gains mindshare.
Really, my basic point is: forget what you know, start afresh, and put faith in brute-force searches. In natural language processing this happened 20 years ago and they made great progress; in computer vision it is happening right now; and these two fields are the toughest, in my humble opinion. What they do, surely other fields can learn from.