 

What to study to get into robotics? [closed]

Tags:

robotics

What should someone study at university level if they want to get into robotics and build robots? So far 'Mechatronics' seems to be the field I'm looking for. I looked at a few plain 'robotics' courses, but they seem to cover only the electrical and computer work, with no detail on building the mechanical components of robots.

Asked Oct 31 '10 by Ali


1 Answer

I'm a professional robotics research consultant, with 30 years of experience working for organizations like SRI International and JPL.

Like computers, robotics has quite a strong divide between the software and the hardware. Hardware is further subdivided into actuators and sensors.

If you'd said "I want to get into computers", I would explain that only a few hardware engineers actually design and build physical computers--most researchers assume that the hardware and firmware have already been built, and then they worry about the software--how to make the system actually work.

Similarly with robots, building the hardware is a job for the mechanical engineers (to design the structure and heat dissipation), with little bits and pieces for power electrical engineers (to spec the motors) and computer engineers (to design the firmware silicon). Next-generation robots also use industrial designers (to make the outsides look pretty, and the insides fit well together).

Research areas for actuator design include fingered hands; tentacles; hummingbird and other bird and insect wings; springy wheels; legs; non-electronic designs for high radiation areas; and surgical instruments.

With cameras in every cell phone, vision sensors are mostly a solved problem at this point. Research areas for sensor design include smart flexible tactile skin, brain wave sensors, and other biomedical sensors. There's still some room for good force sensors as well. These fall in the realms of materials engineering, computer engineering, mechanical engineering, and biomedical engineering.

In order to drive the actuators properly so they don't shake themselves apart, you need a control-theory engineer. Start with Fourier transforms so that you can then understand z-transforms. The learning curve on this mathematics is extremely steep, and careers are quite few, so either you have to be born to be a controls engineer or you should let someone else handle these lower-level details for you.
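For a taste of what that work looks like in practice, here is a minimal discrete PID loop in Python. It sidesteps the z-transform math entirely; the gains and the toy plant model are made-up values purely for illustration, not a recipe for a real actuator:

```python
import numpy as np

def pid_step(error, state, kp, ki, kd, dt):
    """One update of a discrete PID controller.

    state is (integral, previous_error); returns (command, new_state).
    """
    integral, prev_error = state
    integral += error * dt                    # accumulate the I term
    derivative = (error - prev_error) / dt    # finite-difference D term
    command = kp * error + ki * integral + kd * derivative
    return command, (integral, error)

# Drive a toy first-order plant toward a setpoint of 1.0.
dt, setpoint, position = 0.01, 1.0, 0.0
state = (0.0, 0.0)
for _ in range(500):
    command, state = pid_step(setpoint - position, state,
                              kp=2.0, ki=0.5, kd=0.1, dt=dt)
    position += command * dt                  # crude plant: velocity = command
print(f"final position: {position:.3f}")
```

The hard part the controls engineer earns their pay for is not this loop--it's proving stability and choosing the gains so the real mechanism doesn't shake itself apart.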

Signal processing, for the medium- and low-level sensor drivers, has been under the domain of the EEs historically. This works its way up to image processing, which falls under computer science, and then image understanding, which is in the A.I. branch of CS.
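For a flavor of that signal-processing layer, here is a small sketch, assuming NumPy, that cleans up a synthetic noisy sensor trace with a brute-force FFT low-pass. The sample rate, test signal, and cutoff are invented for the example:

```python
import numpy as np

# Synthetic sensor trace: a slow signal buried in high-frequency noise.
fs = 1000.0                                   # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 5 * t)            # 5 Hz "real" motion
noisy = signal + 0.5 * np.random.randn(t.size)

# Brute-force low-pass: zero every FFT bin above the cutoff frequency.
cutoff_hz = 20.0
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(noisy.size, d=1.0 / fs)
spectrum[freqs > cutoff_hz] = 0.0
filtered = np.fft.irfft(spectrum, n=noisy.size)

print(f"noise power before: {np.var(noisy - signal):.3f}, "
      f"after: {np.var(filtered - signal):.3f}")
```

A production driver would use a proper causal filter design rather than zeroing FFT bins, but the idea--separate the motion you care about from the noise you don't--is the same.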

However, as I mentioned, the hardware, firmware, and drivers are all manufacturing details that you solve once and then sell forever. Anybody can buy a Lego or a Bioloids kit off the shelf now, and start working with motors. It's not like 2006, when the Fujitsu HOAP humanoid robot we were working with at JPL was a $50,000 custom-ordered special.

Most of what I consider the really interesting work starts by assuming the hardware and drivers have already been accomplished--and then, what do you do with the system? This is completely in the realm of software.

Robotic software control starts with 3D simulators, which in turn are based on forward kinematics; eventually inverse kinematics; dynamics, if you feel like it; and physics-engine simulations. Math here centers around locations [position + orientation], which are best represented using [4x4] homogeneous coordinate transformation matrices. These are not very hard, and you can get a good background in them from any computer graphics textbook. Make sure you follow the religion of post-multiplying the matrices, ending in a column vector on the right; this lets you chain base-to-waist-to-shoulder-to-elbow-to-hand kinematics in a way that you'll be able to understand. Early textbooks proposed premultiplying using row vectors, because the authors thought it wouldn't make a difference. It does.
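Here is a short NumPy sketch of that convention: each joint contributes a [4x4] homogeneous transform, the chain is built by post-multiplying, and a point in hand coordinates sits as a column vector on the right. The joint angles and link offsets are arbitrary example values:

```python
import numpy as np

def transform(rz_deg, tx, ty, tz):
    """[4x4] homogeneous transform: rotate about z by rz_deg, then translate."""
    c, s = np.cos(np.radians(rz_deg)), np.sin(np.radians(rz_deg))
    return np.array([[c, -s, 0, tx],
                     [s,  c, 0, ty],
                     [0,  0, 1, tz],
                     [0,  0, 0, 1.0]])

# Chain base -> shoulder -> elbow -> hand by post-multiplying, so a point in
# hand coordinates (column vector on the right) maps out to base coordinates.
base_T_shoulder = transform(30, 0, 0, 1.0)    # example angles / link offsets
shoulder_T_elbow = transform(45, 0.5, 0, 0)
elbow_T_hand = transform(-20, 0.4, 0, 0)

base_T_hand = base_T_shoulder @ shoulder_T_elbow @ elbow_T_hand
hand_origin = np.array([0.0, 0.0, 0.0, 1.0])  # homogeneous point
print("hand origin in base frame:", (base_T_hand @ hand_origin)[:3])
```

Read the product left to right and each matrix moves you one frame down the arm; that readability is exactly why the column-vector convention wins.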

Of course the physics engines require a decent knowledge of physics.

Higher-level processing is accomplished using artificial intelligence, usually rules-based systems. Natural-language processing also ties in linguistics and phonetics. Speech recognition and speech generation are again mostly signal processing, taught in EE and CS. Recent advances work on Big Data, which uses statistics, Bayesian reasoning, and vector spaces (from mathematics).
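As a toy illustration of the Bayesian side, here is a sketch that fuses a few binary sensor readings into a belief that a door is open. The sensor probabilities are made up for the example:

```python
# Tiny Bayesian update: fuse noisy door-sensor readings into P(door open).
prior = 0.5                       # P(open) before any reading (assumed)
p_hit_given_open = 0.9            # sensor says "open" when door is open
p_hit_given_closed = 0.2          # false-positive rate when door is closed

belief = prior
for reading in [True, True, False]:           # three sensor readings
    if reading:
        lik_open, lik_closed = p_hit_given_open, p_hit_given_closed
    else:
        lik_open, lik_closed = 1 - p_hit_given_open, 1 - p_hit_given_closed
    numerator = lik_open * belief             # Bayes' rule, then normalize
    belief = numerator / (numerator + lik_closed * (1 - belief))

print(f"P(door open | readings) = {belief:.3f}")
```

The same update rule, scaled up to probability distributions over maps and poses, is the core of robot localization and mapping.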

Robotics has not yet broken out. It is still at the level cell phones were at when Gordon Gekko was walking on the beach talking into a "portable phone" the size of a shoe. I don't see robots becoming ubiquitous before 2020. Around 2025, being a robot programmer will be in as much demand as being an app programmer is today. Study lots of A.I. Start early.

Good luck. I hope this helps.

State-of-the-art humanoid robot system design as of 2006 [short movie]: http://www.seqcon.com/caseJPL.html

Very high level block diagram of components [graphic]: http://www.seqcon.com/images/SystemSchematic640.gif

Answered Sep 17 '22 by DragonLord