[This post is part of the article Robotics With Lego Mindstorms.]
Imagine that you are blindfolded and have to feel your way around a room using just your hands. This is exactly how a robot navigates if it has to rely only on touch sensors. In this case, the touch sensors act like “feelers” that tell the robot whether it is free to move or is pressing against an obstacle.
Now, imagine that you are blindfolded and have to sort balls of three sizes into three bins. You can do so by sensing how far apart your fingers are when you pick up a ball. It is possible for a robot to do the same thing using touch sensors in its fingers.
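The sorting idea above boils down to mapping a measured finger gap to a size bin. Here is a minimal sketch of that logic; the gap values and thresholds are made-up illustrative numbers, not real NXT measurements.

```python
# Hypothetical sketch: classify a ball's size from how far apart a
# gripper's fingers stop. Thresholds (30 mm, 50 mm) are assumed values.

def classify_ball(gap_mm):
    """Return a bin name based on the finger gap when gripping."""
    if gap_mm < 30:
        return "small"
    elif gap_mm < 50:
        return "medium"
    else:
        return "large"

# Example: three grips with different finger gaps
for gap in (22, 41, 63):
    print(gap, "->", classify_ball(gap))
```

On a real robot, the gap would come from how long the gripper motor runs before a touch sensor in the fingers trips, but the binning step would look much the same.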
So what is a touch sensor?
Most touch sensors are very simple devices. In the case of the NXT, it is really a push-button switch that lets the NXT controller know whether the switch contact is made or broken. From this single signal, the NXT can figure out whether the touch sensor is “pressed,” “released,” or “bumped.”
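To see how three events can come from one on/off contact, here is a sketch that derives “pressed,” “released,” and “bumped” from a stream of raw switch readings. This is illustrative logic only, not the NXT firmware's actual implementation; the tick-based timing and the bump window are assumed values.

```python
# Derive touch events from raw switch samples (True = contact made).
# A "bump" is modelled as a press followed by a release within a few ticks.

def detect_events(samples, bump_window=3):
    """samples: list of booleans, one reading per tick.
    Returns a list of (tick, event) pairs."""
    events = []
    previous = False
    press_tick = None
    for tick, contact in enumerate(samples):
        if contact and not previous:
            events.append((tick, "pressed"))
            press_tick = tick
        elif not contact and previous:
            events.append((tick, "released"))
            # A quick press-then-release also counts as a "bump".
            if press_tick is not None and tick - press_tick <= bump_window:
                events.append((tick, "bumped"))
        previous = contact
    return events

print(detect_events([False, True, True, False, False]))
# → [(1, 'pressed'), (3, 'released'), (3, 'bumped')]
```

The point of the sketch is that “bumped” is not a separate electrical signal: it is inferred from the timing of a press and the release that follows it.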
How do you use it?
There are many ways in which you can use a touch sensor with a robot that you are building. If your robot is a humanoid like the AlphaRex, you could put a touch sensor on one of its hands, so that when you shook hands with it, it would say “Hello.” Or you could build a vehicle that uses two touch sensors, one on each side. You could program such a robot to move forward, and if it hits an obstacle on one side, back off and turn toward the other side. It is fairly simple to build such a robot that can find its way out of a maze.
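The two-bumper vehicle described above can be reduced to a simple decision rule: steer away from whichever side was hit. Here is a hardware-free sketch of that rule; the action names are hypothetical stand-ins for whatever motor commands your NXT programming environment provides.

```python
# Sketch of the two-touch-sensor wander behaviour: drive forward until a
# bumper is pressed, then back off and turn away from the obstacle.

def choose_action(left_pressed, right_pressed):
    """Decide a drive action from the two bumper states."""
    if left_pressed and right_pressed:
        return "back_up_and_turn_around"   # head-on collision
    elif left_pressed:
        return "back_up_and_turn_right"    # hit on the left, steer right
    elif right_pressed:
        return "back_up_and_turn_left"     # hit on the right, steer left
    else:
        return "forward"

# Simulated run: clear path, a left-side bump, then clear again
for left, right in [(False, False), (True, False), (False, False)]:
    print(choose_action(left, right))
# → forward
# → back_up_and_turn_right
# → forward
```

On the real robot, this function would sit inside a loop that polls both sensors and drives the motors; the decision logic itself stays this small, which is why the maze-escaping robot is simple to build.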
These are just a couple of examples of what you could achieve with touch sensors. The next post (under development) will cover light sensors.