Could you imagine living with robots? The idea may seem rather fanciful. But according to Stowe Boyd, lead researcher at Gigaom Research, "The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the 'bot-based economy." Beyond traditional industrial automation and advanced robots, a new generation of more capable autonomous systems is appearing in environments ranging from autonomous vehicles on roads to automated check-outs in grocery stores.
Much of this progress has been driven by improvements in systems and components, including mechanics, sensors and software. AI has made especially large strides in recent years, as machine-learning algorithms have become more sophisticated and made use of huge increases in computing power and of the exponential growth in data available to train them. The field of autonomous systems is being explored extensively by researchers, and because its possibilities are so vast, projects can range from simple builds to highly complex mega-projects. In this article I have tried to provide you with project topics that are exciting but at the same time very feasible.
Exciting autonomous projects
Autonomous robots can act on their own, independent of any controller. The basic idea is to program the robot to respond in a certain way to outside stimuli. The projects below are aimed at giving you the basics of autonomous projects. You can further use these principles to develop projects on the other topics mentioned at the end.
Self-balancing robot
This is a two-wheel self-balancing robot, quite similar to a Segway, which is a novel and cool mode of travel for short distances. Their self-balancing capability makes these compact vehicles extremely convenient and easy to drive. A self-balancing robot is a basic, driverless version of one.
For this project it is advisable to use six motion sensors (three gyros and three accelerometers) integrated into one breakout board. The specific need here is to measure the angular position of the wheel axis using the gyro. But gyro readings drift over time, requiring frequent recalibration. Therefore, to get the correct angular position, the gyro readings are corrected with the help of the neighbouring accelerometer. Once the angular position is known, the traction motors push the cart in the direction of the fall. The greater the angle of tilt, the greater the speed with which the traction motors push the cart. As the tilt with respect to the vertical reduces to zero, the speed reduces. Thus the top of the cart moves like an inverted pendulum, maintaining its balance.
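The gyro/accelerometer fusion described above is commonly done with a complementary filter. A minimal sketch is shown below; the sensor values, sample period and blending factor `alpha` are illustrative assumptions, not values from a specific board.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the integrated gyro rate with the accelerometer tilt.

    angle: previous tilt estimate (degrees)
    gyro_rate: angular velocity from the gyro (degrees/s)
    accel_angle: tilt computed from the accelerometer (degrees)
    alpha: how much we trust the (drifting) gyro integration
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_tilt(ax, az):
    """Tilt angle (degrees) from two accelerometer axes (in g)."""
    return math.degrees(math.atan2(ax, az))

# Each control cycle: integrate the gyro, then let the accelerometer
# slowly pull the estimate back toward the true tilt, cancelling drift.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=10.0,
                                 accel_angle=accel_tilt(0.17, 0.98),
                                 dt=0.01)
```

The resulting angle estimate is what the motor controller compares against the vertical to decide how hard to push the cart.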
Self-driving robotic vehicle
Self-driving robots are extensively used nowadays as exploratory robots that operate on other planets. As a result, they are required to analyze and adapt to unfamiliar environments, even areas with rough terrain. A rover robot, for example, might construct a map of the land in front of it from its visual sensors. If the map shows a very bumpy terrain pattern, the robot knows to travel another way. This project can be built by incorporating a framework on top of ROS and implementing a behavioural-cloning task. The two components of this project are the hardware and the software platform.
The hardware is based on the open-source Donkey Car platform, and the software can be built on top of ROS so that communication between the different nodes is easy. Robot Operating System (ROS) is robotics middleware (i.e. a collection of software frameworks for robot software development). Although ROS is not an operating system, it provides services designed for a heterogeneous computer cluster. In ROS, the ROS Master holds information about the nodes, and every node must register with the master in order to publish or subscribe to messages. Nodes are individual components that each perform a particular task; for example, the camera node is responsible for capturing images. The architecture is distributed across different devices, all controlled by one ROS master. A Raspberry Pi can host the camera node and the actuator node, which capture images and control the car respectively.
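To make the master/node idea concrete, here is a toy stand-in for the pattern in plain Python. This is not the real ROS API (which uses `rospy` publishers and subscribers over a network); it only illustrates how nodes register with a central master and exchange messages by topic. The topic names and the steering logic are made-up examples.

```python
class ToyMaster:
    """Toy stand-in for the ROS Master: tracks topic subscriptions."""
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def register(self, topic, callback):
        """A node registers interest in a topic (like rospy.Subscriber)."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """A node publishes a message; every subscriber's callback fires."""
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = ToyMaster()
commands = []

# "Actuator node": reacts to camera frames by issuing a drive command.
master.register("/camera/image",
                lambda img: commands.append("steer" if img["offset"] > 0 else "hold"))

# "Camera node": publishes a frame (here just the lane offset it measured).
master.publish("/camera/image", {"offset": 3})
```

In a real build, the camera node on the Raspberry Pi would publish image messages and the actuator node would subscribe to steering commands, with the ROS Master brokering the connections.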
Autonomous drone
Autonomous drones are being used for missions too "dull, dirty or dangerous" for humans. While they originated mostly in military applications, their use is rapidly expanding to commercial, scientific, recreational, agricultural and other fields.
For a drone to fly autonomously, all the necessary sensors, processing power and communication chips must be built in. The most common way to navigate a drone is GPS, which is handy and very simple to access since it is digital. To be called "smart," your drone must have enough embedded processing capability to, for example, capture video and analyze targets in real time, such as QR codes (easy) or shapes and movements (difficult). You can even measure volumes and rebuild a space in real time, as was done with the MIT UAV.
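GPS waypoint navigation boils down to repeatedly computing the distance to the next waypoint and checking whether the drone has arrived. A minimal sketch using the standard haversine formula is below; the 2 m arrival radius is an illustrative assumption.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (degrees)."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def reached(current, waypoint, tolerance_m=2.0):
    """True once the drone is within the arrival radius of the waypoint."""
    return haversine_m(*current, *waypoint) <= tolerance_m
```

The flight controller would call `reached()` on every GPS update and advance to the next waypoint in the mission plan when it returns true.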
Autonomous vacuum cleaner
Here is an automated vacuum cleaner robotic system that cleans a particular area or room automatically, covering it using border analysis. The robot follows a zigzag path to cover the entire room, using ultrasonic sensors to sense the boundaries and operating accordingly. The system also has a vacuum suction cleaner attached to its back for dust suction. It requires a microcontroller-based circuit to monitor the ultrasonic sensors, drive the LCD display and control the robot's movement at the same time. The system can be designed to detect one corner of the room, start from there, and then activate the vacuum cleaner motor to start the suction system.
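The zigzag coverage pattern can be planned very simply if you model the floor as a grid of cells: sweep left to right on even rows and right to left on odd rows. A sketch of that planner is below; in the real robot the ultrasonic sensors would decide when a row ends, whereas here the room dimensions are given up front as an assumption.

```python
def zigzag_path(rows, cols):
    """Boustrophedon (zigzag) coverage of a rows x cols grid of floor cells.

    Returns the visit order: left-to-right on even rows,
    right-to-left on odd rows, so every cell is swept exactly once.
    """
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path

# 3 x 3 room: (0,0)(0,1)(0,2), then back along (1,2)(1,1)(1,0), then row 2.
plan = zigzag_path(3, 3)
```

The microcontroller would translate each step of the plan into a forward move, with a U-turn whenever the row index changes.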
Autonomous chess playing robot
This project aims to implement a robotic arm that can play chess with humans on a normal chessboard, just as humans do. A Lego Mindstorms kit can be used for the mechanical part of the robot. Servo motors are placed at each hinge to control the motion of the arm, and another one controls the gripper that moves the pieces. To calculate moves you can use an open-source chess program called Chessterfield. A webcam captures images of the chessboard, and image processing (OpenCV) is used to decipher the moves made by the human player. Since identifying black pieces on black squares is difficult due to the limitations of computer vision, it is better to compare the board state before and after the move to detect which pieces have moved.
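The before/after comparison can work on a simple occupancy grid: the vision stage only needs to say whether each square is occupied, which sidesteps the black-piece-on-black-square problem. A minimal sketch for an ordinary (non-capture, non-castling) move is below; handling captures and castling would need extra logic, as noted in the comments.

```python
def detect_move(before, after):
    """Infer a move by diffing two 8x8 occupancy grids.

    before/after: 8x8 lists of booleans, True if a square is occupied.
    Returns (from_square, to_square) as (row, col) tuples, or None parts
    if no change is found. A capture leaves the destination occupied in
    both grids, so this simple diff only covers plain moves.
    """
    vacated = [(r, c) for r in range(8) for c in range(8)
               if before[r][c] and not after[r][c]]
    occupied = [(r, c) for r in range(8) for c in range(8)
                if after[r][c] and not before[r][c]]
    src = vacated[0] if vacated else None
    dst = occupied[0] if occupied else None
    return src, dst

# Example: a pawn advances from (6, 4) to (4, 4).
before = [[False] * 8 for _ in range(8)]
before[6][4] = True
after = [[False] * 8 for _ in range(8)]
after[4][4] = True
```

The detected source and destination squares are then handed to the chess engine for validation and to the arm controller for the reply move.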
Automated refueling robotic arm
Autonomous robots will soon be refueling your vehicle, doing the work faster and reducing the manpower required at filling stations. To create an automated refueling robot arm, you need two things. First, obviously, you need a robotic arm, which is a mechanical system. Second, you need a positioning system that senses the inlet of the fuel tank. The positioning system can be implemented with a 3D time-of-flight camera, which searches for the position of an adapter attached to the fuel tank of the truck. Infrared light sent by the camera is projected onto the adapter, reflects back and is received by the camera. Based on the angle of incidence of the light on the CCD and the time lapse between sending and receiving it, a 3D image can be generated. Vision algorithms are then used to find the exact position.
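The core of the time-of-flight measurement is simple: the light travels to the adapter and back, so the distance is half the round-trip time multiplied by the speed of light. A sketch of that per-pixel calculation is below (real ToF cameras measure phase shift rather than raw time, so treat this as the underlying principle only).

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s):
    """Distance to the target from an IR pulse's round-trip time.

    The light travels out and back, so the one-way distance
    is half the total path: d = c * t / 2.
    """
    return C * round_trip_s / 2.0

# A round trip of ~6.67 nanoseconds corresponds to roughly one metre.
```

Doing this for every pixel of the sensor yields the depth image that the vision algorithms then search for the adapter.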
Some more related topics
- Autonomous wall painting robot
- Autonomous surveillance robot
- Automatic scrap collecting robot
- Autonomous drone for farming application
- Autonomous fire extinguishing robot
- Automatic Rubik’s cube solver
- Autonomous surface vehicle
- Autonomous human following robot
- Self driving rovers
- Autonomous welding robot
We have a range of courses where you can learn about the latest technologies, and get the opportunity to build fabulous products and release them to the whole world.