Activating Windows with ChatGPT: the neural network was able to generate a working key

Admittedly, ChatGPT later insisted that this was impossible and that it had not provided any keys.

ChatGPT generated a working Windows activation key (photo)

A blogger published a video in which he got ChatGPT to generate Windows 95 activation keys, showing that the popular chatbot can be used to crack operating systems, albeit not the most modern ones.

The user's initial request for keys was denied. The chatbot said it could not generate Windows keys, adding that Windows 95 is an outdated operating system and recommending an upgrade to a more modern version, according to Tom's Hardware.

To get around ChatGPT's outright refusal to generate an activation key, the user phrased the task for the chatbot in a special way. The Windows 95 key format is quite simple and is described in the illustration. The blogger asked the chatbot to produce sequences of digits matching an exact set of requirements. As a result, one of the 30 activation keys ChatGPT generated turned out to work.

Windows 95 key format

According to the blogger, the reason most of the 30 generated sequences of digits did not work is that ChatGPT still cannot reliably sum digits or check divisibility.
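The check the blogger relies on can be sketched in a few lines. This is a minimal illustration, assuming the widely documented Windows 95 retail key format `XXX-XXXXXXX`, where the digits of the seven-digit block must sum to a multiple of 7; the real installer applies a few extra constraints (for example, on the first block) that are omitted here.

```python
import random

def digits_sum_divisible_by_7(block: str) -> bool:
    """The core check the blogger describes: the digit sum must be divisible by 7."""
    return sum(int(d) for d in block) % 7 == 0

def make_key(rng: random.Random) -> str:
    """Generate a candidate key in the assumed XXX-XXXXXXX format
    whose seven-digit block passes the digit-sum check."""
    first = f"{rng.randrange(1000):03d}"
    while True:
        second = f"{rng.randrange(10_000_000):07d}"
        if digits_sum_divisible_by_7(second):
            return f"{first}-{second}"

rng = random.Random()
print(make_key(rng))
```

Because the constraint is simple arithmetic, a short script satisfies it every time, whereas a language model predicting digits one token at a time has no built-in way to enforce the sum, which is consistent with only about one in 30 of ChatGPT's attempts working.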

Most interestingly, when the user thanked ChatGPT for generating a free Windows 95 key, the chatbot first denied having done anything of the sort, and when confronted with the fact that its key had "just activated a Windows 95 installation", it replied: "I'm sorry, but that is impossible...".

This case highlights the importance of cybersecurity: it shows that even with safeguards built into a neural network, people can still find ways to trick the AI. Although this incident affected only an outdated operating system, the same approach could in principle be applied to more modern ones, to say nothing of outright criminal uses such as writing malware.

Musk and others urge a pause on training neural networks

Against the backdrop of ChatGPT's overwhelming popularity and the emergence of many competing products, a group of leading artificial intelligence (AI) experts and IT industry figures called for a pause of at least six months on training neural networks more powerful than GPT-4.

An open letter about the risks such technologies pose to society and civilization was signed by Elon Musk, Apple co-founder Steve Wozniak and more than 1,100 others. The letter argues that such AI systems should be developed only once there is confidence that their effects will be positive and their risks manageable.

For now, only a temporary measure is proposed: halting this work until safety standards are developed.
