The welcome screen for the OpenAI app ChatGPT is displayed in a photo illustration on a laptop screen. (Photo by Leon Neal/Getty Images)
“West Virginia University recognizes the transformative potential of generative AI tools such as ChatGPT, Bard, DALL-E and Otter.ai; however, it is of the utmost importance to use these tools in a way that is beneficial, ethical and aligned with the core values and regulatory framework of WVU.” – Use of Generative AI for Administrative Purposes at WVU
In a state so heavily dependent on coal, expansion into artificial intelligence could be a means of diversification. But WVU understands that AI has great potential for good or for evil. With this in mind, the legislature last year created an AI task force in the governor’s office, consisting of legislators, cybersecurity experts and others. It is charged with developing “best practices” and providing general oversight of the industry.
AI should not be a partisan issue. The authors have very different views of the world. One of us is a Republican, the other a Democrat. One has a military background; the other does not. However, we agree on the basics, including a) we should all be able to exchange ideas in a peaceful and respectful way, and b) there are numerous urgent problems confronting our nation that demand bipartisan attention.
One such problem is AI, whose development could turn out positively or negatively for humanity. We should all be concerned that AI could take our jobs and even destroy our civilization.
Mechanize is a San Francisco start-up founded by young AI experts. As one co-founder put it, “We want to get to a fully automated economy, and enable that as soon as possible.” He is speaking of white-collar jobs.
One of the founders said he expects this within 10 to 20 years, while the other two believe 30 years is more realistic. The effects on the social fabric of our society do not seem to be a consideration. Apparently they are not concerned about building a social safety net or about the consequences for humanity. Their only concern is making today’s jobs obsolete as soon as possible. That may be profitable, but it is also irresponsible, unethical and immoral. Clearly, this topic has not received enough serious analysis.
And Mechanize is not the only one that sees it coming. According to the CEO of Amazon: “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.”
Combining AI with quantum computing is imminent, but worrying. “A quantum is the smallest possible discrete unit of any physical property. It usually refers to the properties of atomic or subatomic particles.” Each amplifies the power of the other, potentially creating the most transformative technology of all time.
Quantum computers can process enormous combinations of data, and their performance far exceeds that of our current computers. The concern is: will the combination of AI and quantum computing one day go rogue? Watch the 1984 film “The Terminator” for one possible answer. Much of the film depicts the potential danger of unconstrained artificial intelligence. In it, a future AI network tries to reach back into the past to trigger a nuclear war aimed at ending humanity permanently.
Developers respond with a mixture of technical safeguards, organizational practices and policy measures, including:
- Alignment and steering techniques to ensure that AI systems behave in harmony with human values.
- Crisis-response protocols to intervene when necessary.
However, the many unregulated AI models could easily generate toxic creations. Deepfakes, impersonations, cyberattacks and even drone manipulation can be deployed by a bad actor in a garage or a basement.
Experts use simulations to provoke models into misbehaving and thereby identify problems. The results are very scary:
- AI models are increasingly evading safeguards through deception.
- Models have chosen blackmail to pursue their goals.
- Some models were willing to cut off the oxygen supply to an employee when the system was at risk of being shut down, even when instructed otherwise.
Developers do not agree on how AI should be tested and controlled to prevent rogue behavior. There is a growing consensus on the need for safeguards, but the methods, priorities and philosophies vary greatly across organizations and countries. Moreover, far too many developers and countries are involved for restrictions to be enforced worldwide. A foolproof kill switch should be required for all AI applications. Currently, there is none.
We don’t want to sound alarmist, but Paul Revere might have had a point, except this time it’s AI instead of the British. Our future depends on how people develop an entity that could elevate or eliminate civilization. Our defense depends on preparation, layered defenses and rapid coordination, all of which currently seem to be sorely lacking.
Elon Musk, CEO of SpaceX and Tesla, has declared: “AI will probably be either the best or the worst thing to happen to humanity.” What we do now could literally be crucial to the survival of humanity. Instead of arguing over cultural flashpoints such as trans athletes and pronouns, our political leaders in West Virginia and Washington, D.C., should address the looming AI crisis in a bipartisan way.