Partner, architect, or boss? We asked ChatGPT to design a robot and here’s what happened




European researchers are working on the design of a tomato picking robot. Adrien Buttier/EPFL


At a time when some warn that artificial intelligence (AI) could drive the human species to extinction, one might imagine that a robot designed by an AI would be something like the Terminator created by Frankenstein. Or perhaps the other way around.


But what would happen if, in a dystopian future or otherwise, we had to collaborate with machines to solve problems? How would that collaboration work? Who would be the boss, and who the employee?


Having binged plenty of Black Mirror episodes, along with Arthur C. Clarke’s novel “2001: A Space Odyssey”, I would bet that the machine would be the boss.

“We wanted ChatGPT to design a robot that would be really useful”


However, a real experiment of this kind, conducted by researchers, has yielded unexpected results that could have a major impact on collaboration between humans and machines.


Professor Cosimo Della Santina and doctoral student Francesco Stella, both of TU Delft, together with Josie Hughes of the Swiss technical university EPFL, ran an experiment in which they designed a robot in partnership with ChatGPT to tackle a major societal problem. “We wanted ChatGPT to not just design a robot, but a robot that is actually useful,” Della Santina said in a paper published in Nature Machine Intelligence.


This kicked off a series of question-and-answer sessions to determine what the two parties could design together. Large language models (LLMs) like ChatGPT are very efficient at processing huge amounts of text and data, and can produce coherent responses at lightning speed.


The fact that ChatGPT can do this with technically complex information makes it even more impressive – and is a real boon for anyone looking for a super-powerful search assistant.
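For readers curious about what such a back-and-forth looks like in practice, here is a minimal sketch using the OpenAI Python SDK. To be clear, this is an illustrative assumption rather than the researchers’ actual setup: they worked through the ChatGPT interface itself, and the model name, prompts and loop below are invented for the example.

```python
# Minimal sketch of an iterative question-and-answer session with an LLM.
# Illustrative only: model name, prompts, and structure are assumptions,
# not the setup used by the TU Delft / EPFL researchers.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

questions = [
    "What are the main challenges facing human society in the future?",
    "Which of these could a harvesting robot realistically help with?",
    "Which crop would offer the greatest economic value if automated?",
]

# The conversation history grows with each exchange, so later questions
# build on the model's earlier answers.
messages = [{"role": "system",
             "content": "You are helping engineers design a genuinely useful robot."}]

for question in questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    print(f"Q: {question}\nA: {answer}\n")
    messages.append({"role": "assistant", "content": answer})
```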


Working with machines


When the European researchers asked ChatGPT to identify some of the challenges facing human society, the AI pointed to the problem of ensuring a stable food supply in the future.


A back-and-forth between the researchers and the chatbot followed, until ChatGPT settled on tomatoes as the crop that robots could grow and harvest – and, in doing so, have a significant positive impact on society.




ChatGPT made useful suggestions on how to design the gripper so that it could handle delicate objects like tomatoes. Adrien Buttier/EPFL


This is an area where the AI partner brought real added value. How so? By making suggestions in fields such as agriculture, where its human counterparts had little practical experience. Identifying the crop whose automation would bring the greatest economic value would otherwise have required tedious research on the scientists’ part.


“Even though Chat-GPT is a language model and its code generation is text-based, it provided significant ideas and intuitions for physical design, and showed great potential as a sounding board to stimulate human creativity,” said Hughes, from EPFL.


The humans were then tasked with selecting the most interesting and appropriate directions to pursue their goals, based on the options provided by ChatGPT.


Intelligent design


But it was in working out how to harvest tomatoes that ChatGPT really shone. Tomatoes and similarly delicate fruits – yes, the tomato is a fruit, not a vegetable – pose the greatest challenge when it comes to harvesting.




The gripper designed by the AI, at work. Adrien Buttier/EPFL


When asked how tomatoes could be harvested without damaging them, the chatbot did not disappoint and proposed original and useful solutions.


Recognizing that any part that came into contact with the tomatoes had to be soft and flexible, ChatGPT suggested using silicone or rubber. It also pointed to CAD software, molds and 3D printing as ways to build these flexible artificial hands, and it proposed a claw or a ball shape as design options.


The result is impressive. This collaboration between AI and humans made it possible to design and build a functional robot capable of picking tomatoes with dexterity – no small feat given how easily they are damaged.


The dangers of partnership


This unique collaboration has also raised many questions that will become increasingly important as human-machine design partnerships develop.


A partnership with ChatGPT offers a truly interdisciplinary approach to problem solving. However, depending on how the partnership is structured, you could achieve different results, each with substantial implications.


For example, an LLM could provide all the details needed to design a robot, while the human simply acts as the executor. In this approach, the AI becomes the inventor, allowing laypeople to take part in robot design.

Lack of control on the part of humans


This relationship is similar to the researchers’ experience with the tomato picking robot. Although they were stunned by the success of the collaboration, they noticed that the machine did a lot of the creative work. “We found that our role as engineers shifted toward more technical tasks,” said Stella.


This lack of human control is where the dangers arise. “In our study, Chat-GPT identified tomatoes as the crop most worth pursuing for a robotic harvester,” said Hughes. “However, this result may be biased toward crops that are better covered in the literature, as opposed to those for which there is a real need. When decisions are made outside the scope of the engineer’s knowledge, this can lead to serious ethical, technical or factual errors.”


This concern, in short, points to one of the serious dangers of using LLMs. Their seemingly miraculous answers are only possible because they were fed a particular body of content and then asked to regurgitate parts of it.

Are you going to entrust the design of a robot to a machine that hallucinates?


The answers essentially reflect the bias – good or bad – of the people who designed the system and the data provided to it. This bias means that the historical marginalization of certain segments of society, such as women and people of color, is often replicated in LLMs.


And then there is the problem of hallucinations in LLMs. Here, the AI simply makes things up when confronted with questions to which it has no easy answers.


There is also the increasingly thorny problem of the unauthorized use of proprietary information, as several lawsuits filed against OpenAI show.


Nevertheless, a balanced approach – in which LLMs play more of a supporting role – can be enriching and productive, forging vital interdisciplinary links that might not have emerged without the bot. That said, you’ll need to engage with AIs the same way you do with your children: diligently check all homework and screen-time claims, especially when they seem casual.


Source: ZDNet.com
