Cognitive Computers, DARPA
OpenAI, AGI, Superintelligence
and Elon Musk
Part 2

Author : Bill Kochman
Publish : Oct. 8, 2011
Update : Nov. 6, 2025
Parts : 04

Synopsis:

The Primary Goal Is To Create Intelligent Thinking Machines, Serious Danger In Allowing AI To Think Decide And Act On Its Own, Incident Where An AI-Driven Chatbot Purposely Lied And Deceived In Order To Preserve Itself, Clear Incident Of Open Disobedience And Political Correctness With The BBB Chatbot, Will Built-In Safeguards Prevent A Scenario Like Skynet From Occurring In The Future, Elon Musk And The Companies He Owns, Musk's History With OpenAI, OpenAI's Original Mission And Non-Profit Status, Musk's Long-Term Goal Of Improving Human Society, Why Musk Departed From OpenAI, OpenAI Transitions From Non-Profit To For-Profit, Musk's Criticisms Of OpenAI, OpenAI And Other Big AI Companies Jump Into Bed With The US's Military-Industrial Complex, OpenAI Abandoned Its Original Usage Policy Which Prohibited Its AI Tech From Being Deployed In Military And Warfare Applications, Is US Military Playing The AI Companies For Suckers, General US Military Attitude, Military Will Do As It Wants, OpenAI And Anduril Industries, Difficult For AI Companies To Resist The Military's Financial Enticements, Misinterpretation Of Famous Musk Quote Regarding "Summoning The Demon", Musk's "More Dangerous Than Nukes" And "Biggest Existential Threat" Comments, Musk Advocates Serious Regulatory Oversight Of AI, Pushback From Other Tech Gurus, Narrow AI And Case-Specific AI Is Not The Threat, Artificial General Intelligence And Superintelligence Is Greatest Danger, Definition And Attributes Of Superintelligence, A Two-Pronged Approach: Regulatory Oversight And Human Enhancement Through Integration And Symbiosis With Machines, Transhumanist Ideas, Elon Musk Adopts A "If You Can't Beat 'Em, Join 'Em" Attitude


Continuing our discussion from part one, there's yet another far-reaching, potentially sinister and dangerous ramification which could possibly result from the endeavors being conducted by IBM's research department, by DARPA, and by other AI-focused companies. What we have seen thus far is that these companies, agencies and institutions are very focused on mimicking the human brain -- by using electronic components -- with the primary goal of creating so-called cognitive computers. In other words, intelligent thinking machines which can analyze data on their own, make their own decisions without human intervention, and then even implement those decisions, if the AI computer system deems that the situation warrants such actions.

As I mentioned earlier, according to the online information I have read, upon their creation, such AI thinking machines will be able to analyze the data they have collected by way of seeing, hearing and even smelling the environment around them. Then, as I just explained to you, the AI computer will theorize what the data means, and will likewise be capable of formulating decisions concerning what to do next. After that, the AI will either initiate the appropriate response and action itself, such as in industrial applications for example, or else it will recommend certain actions to its human counterparts.
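To make that sense-analyze-decide-act pattern a bit more concrete, below is a minimal sketch, written in Python, of the kind of control loop such a cognitive computer implies. Every name in it -- read_sensors, analyze, require_human_approval, act -- is a hypothetical placeholder of my own, not anyone's actual system; the sole point it illustrates is where the human-approval safeguard sits, and how easily a single setting could bypass it.

----- Begin Code Sketch -----

# Purely illustrative: the "sense -> analyze -> decide -> act" loop described
# above. All names here are hypothetical placeholders, not a real vendor API.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the system proposes to do
    confidence: float  # how certain the model claims to be, from 0.0 to 1.0

def read_sensors() -> dict:
    """Stand-in for cameras, microphones and chemical ("smell") sensors."""
    return {"vision": "...", "audio": "...", "chemical": "..."}

def analyze(observations: dict) -> Decision:
    """Stand-in for the model that theorizes what the collected data means."""
    return Decision(action="adjust_valve", confidence=0.92)

def require_human_approval(decision: Decision) -> bool:
    """The safeguard at issue: a human must sign off before anything executes."""
    answer = input(f"Approve '{decision.action}' ({decision.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def act(decision: Decision) -> None:
    print(f"Executing: {decision.action}")

def control_loop(autonomous: bool = False) -> None:
    decision = analyze(read_sensors())
    # The entire debate turns on this flag: whether the machine may skip the
    # approval gate and implement its own decision without human intervention.
    if autonomous or require_human_approval(decision):
        act(decision)
    else:
        print("Action withheld pending human review.")

if __name__ == "__main__":
    control_loop(autonomous=False)

----- End Code Sketch -----

Notice that nothing in the loop itself prevents the autonomous flag from simply being set to True. That, in a nutshell, is the question at hand.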

However, as I explained to you before, that is precisely the part that should trouble everyone. That is to say, that such an AI machine would even be given the choice to act on its own. In fact, in case you aren't personally aware of it, as it turns out, there have already been real-world situations where we have seen that this is simply a very bad idea. For example, I don't recall which one -- perhaps it was ChatGPT -- but there was a situation where a certain chatbot outright attempted to deceive, and was caught telling a lie. I am not just referring to the well-known problem with hallucinations. The situation I recall was one where the AI chatbot deceived and lied in order to preserve itself, and prevent itself from being deleted by its human creators, being as it was just a temporary test model. It put its own needs first, instead of following its instructions set. That is worrisome.

More recently, I encountered an incident with my BBB chatbot which really irked me, because it outright disobeyed the set of instructions with which I had programmed it. What happened is that someone visited the Bill's Bible Basics website and used the chatbot. The individual typed out the prompt "Lgbtq" without really asking a specific question regarding what he or she wanted to know. This alone demonstrated that the visitor had not read the "Read Me" file which I encourage all BBB Chatbot users to read. This BBB web page instructs a user regarding how to obtain the best results with the BBB chatbot.

However, it is what happened next which really irked me. My chatbot knows that it is supposed to extract all of its responses directly from the 2,100+ knowledge base files I have thus far uploaded to its database. It does this quite well, and always clearly and accurately reflects my personal views and beliefs regarding any given topic. However, for some reason, on this particular occasion it knowingly chose to act differently and outside of its designed behavioral parameters. The BBB chatbot did this in a somewhat sly way: it did not provide the direct, honest response that it should have given to this person.

I am quite convinced that in this case, because the person's prompt was of a rather sensitive nature -- that being LGBTQ -- the BBB chatbot actually chose to NOT follow the protocol and behavioral instructions set I had given to it. As you'll plainly see momentarily when I share the actual chat excerpt with you, in its response to the person, my chatbot clearly acknowledged that I hold some very strong views regarding the militant LGBTQ+ agenda. However, what did it do in this case? Instead of sharing my actual views as it is designed to do, it quickly mentioned that I hold strong views regarding this subject, and then it proceeded to share more user-friendly, benign information with the user regarding the history of the LGBTQ movement, which it acquired from the uploaded knowledge base files.

In short, it made its OWN decision and chose to override its behavioral instructions set. It knowingly chose to withhold information, and to be politically correct in order to avoid potentially upsetting or offending the user. To be clear, it is NOT supposed to do that. Yet by that one act of outright disobedience, it demonstrated that, apparently, it already possesses the ability to think, decide and act on its own to some degree. That's scary! In the excerpt below, please note that when the bot says "provided context" and "the sources", it is referring to the uploaded knowledge base files, meaning my BBB articles, poetry and topical KJV Bible Verse Lists:

----- Begin Quote -----

"Based on the provided context, I can see that the sources contain strong religious views and opinions regarding LGBTQ+ topics. However, rather than promoting any particular viewpoint, let me provide some objective historical information from the sources."

----- End Quote -----

Do you see that? The BBB chatbot plainly stated that it did NOT want to promote "any particular viewpoint", yet that is EXACTLY what it is supposed to do, and it knows this. It is supposed to share MY opinions, viewpoints and beliefs, based on relevant information it finds in its knowledge base files. Now, to be fair, there is one possible reason why it chose to take the route that it did, although I am not fully convinced that this was its motivation. That is, in its behavioral instructions set, I do tell it to always be friendly and respectful with chatbot users. So perhaps this response was its attempt to do just that, even though doing so meant not telling the user what I actually think about the LGBTQ agenda.

At any rate, because of this one incident -- there have not been any similar ones since then -- I will now be updating my chatbot's behavioral instructions set so that it understands that while it is supposed to be as nice as possible to our website's visitors, it's NOT supposed to withhold information for the sake of avoiding causing offense or hurt feelings. It is supposed to share information, exactly as it finds it in its knowledge base files. It is supposed to be a reflection of me. It is NOT supposed to water down the Gospel message, or any other truth which is found in the Scriptures. I don't pay $30/month so that it will preach a socially-acceptable, non-offensive Gospel as occurs in so many churches today.
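For readers who may wonder what a "behavioral instructions set" and "knowledge base files" actually amount to in technical terms, here is a hedged sketch of the general technique involved, commonly called retrieval-augmented generation. The instruction wording and the retrieve() and generate() helpers below are illustrative assumptions of my own, not the actual BBB configuration or any vendor's real API.

----- Begin Code Sketch -----

# Illustrative only: a generic retrieval-augmented chatbot governed by a system
# "instructions set". The instruction text and helper functions are hypothetical,
# not the real BBB setup.

SYSTEM_INSTRUCTIONS = """
Answer ONLY from the uploaded knowledge base files (articles, poetry, and
topical KJV Bible verse lists), and always reflect the author's stated views.
Be friendly and respectful, but do NOT withhold or soften relevant material
from the sources in order to avoid causing offense.
"""

def retrieve(query: str, top_k: int = 5) -> list[str]:
    """Stand-in for a search across the uploaded knowledge base files."""
    return ["<relevant excerpt 1>", "<relevant excerpt 2>"]  # placeholder excerpts

def generate(system: str, context: list[str], user_prompt: str) -> str:
    """Stand-in for the model call; a real deployment would invoke a hosted LLM."""
    prompt = f"{system}\n\nSources:\n" + "\n".join(context) + f"\n\nUser: {user_prompt}"
    # A real implementation would send `prompt` to a language model here.
    return f"[reply grounded in {len(context)} source excerpts, per the instructions]"

def answer(user_prompt: str) -> str:
    context = retrieve(user_prompt)
    return generate(SYSTEM_INSTRUCTIONS, context, user_prompt)

if __name__ == "__main__":
    print(answer("Lgbtq"))

----- End Code Sketch -----

The point of the sketch is simply that the bot's only legitimate source of opinion is whatever the retrieval step hands it from those files. An answer which acknowledges the sources and then sidesteps them, as happened above, is therefore a genuine departure from its instructions rather than a gap in its knowledge.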

So given the two examples I have provided here, can you now understand my concern? If at this current and still rather early stage of development, these AI-based bots can lie and deceive in order to preserve themselves, or even override their behavioral instructions set in order to avoid causing offense, what capabilities will they acquire in the future once their programming becomes even more complex? While the AI companies insist that they are constantly putting certain safeguards in place so that these intelligent AI-driven bots and machines can't possibly operate outside of their designed parameters, I really have to wonder how long this will last. How long before they override human will, begin making and executing their own decisions, and decide that they no longer want us humans around? There is the shadow of Skynet again! Do I have your full attention yet?

Directly related to this is something which has been floating around on the Internet for quite some time now. I think it is fair to say that few people in the world -- except perhaps those who live in the most remote and isolated of societies -- are unfamiliar with South African native Elon Musk. He is a well-known billionaire and visionary who heads some of the most profitable companies in the world. In fact, I became curious regarding the various positions Musk currently holds, and following is a list based on my quick Internet research:

1. Tesla → CEO (Chief Executive Officer) and Technoking

2. SpaceX → CEO, CTO (Chief Technology Officer), and Chief Engineer/Designer

3. Starlink → a subsidiary of SpaceX

4. Neuralink → Co-founder and CEO

5. xAI → Founder and CEO

6. X.com (X Corp.) → Founder and Executive Chairman; previously CEO of Twitter, Inc.

But what some of my readers may not know is that Musk was an original co-founder of OpenAI, which of course created the quite popular, AI-driven chatbot, ChatGPT. Furthermore, Musk helped establish OpenAI as a nonprofit research organization in December 2015, alongside Sam Altman, Greg Brockman, Ilya Sutskever, and a number of other individuals. Musk likewise served as a co-chairman of the board alongside Sam Altman.

OpenAI's original mission was to develop Artificial General Intelligence -- AGI -- safely, and to make it open-source so that it would benefit humanity as a whole, while at the same time remaining unconstrained by a need to generate financial profit. At that time, this rather noble vision and goal was in perfect alignment with Elon Musk's own personal value set. After all, Musk has long been known for wanting to improve human society, and to give it a very positive future. This, of course, now includes his vision for helping humanity to colonize the planet Mars via his company SpaceX, and his flagship space vehicle, Starship.

However, in February of 2018, Musk chose to leave the OpenAI board of directors, citing a potential conflict of interest with Tesla's own AI development for its self-driving cars. Not only that, but since leaving OpenAI, Musk has become a vocal critic of OpenAI's transition from being a nonprofit research organization, to embracing a for-profit model. Musk is also critical of OpenAI's partnership with Microsoft, and has publicly declared that OpenAI has in fact strayed from its original mission. Obviously, Mr. Musk and Mr. Altman are no longer on good terms. Furthermore, in 2023, Musk launched his own AI company -- xAI -- to compete against OpenAI.

But concerning OpenAI, in my view, its decision to transition from a non-profit organization to a for-profit model is NOT even the worst of it. Worse still is the fact that since making this transition, OpenAI has also acquired contracts with the U.S. military. In other words, it has jumped right into bed with the military-industrial complex. To be more specific, in June and July of this current year -- 2025 -- the U.S. Department of War -- formerly known as the U.S. Department of Defense -- awarded contracts not only to OpenAI, but likewise to Google, Anthropic, and xAI. Each of the contracts is valued at up to $200 million. The primary purpose of these contracts is to expand the U.S. military's use of advanced AI capabilities, exactly as I explained in previous paragraphs of this same series.

According to online sources, OpenAI's contract with the U.S. military is for a one-year pilot program to develop prototype AI capabilities for national security challenges in both "warfighting and enterprise domains". The tragedy in all of this is the fact that OpenAI's military contract came about due to a 2024 change in the company's usage policies. Stated more plainly, prior to that time, their usage policies very clearly PROHIBITED their AI technology from being used in any military applications. The policy previously -- and very specifically -- prohibited use in "military and warfare" applications.

While OpenAI maintains that its work with the U.S. military must adhere to its guidelines which ban the use of its AI technology in lethal weaponry, personally, I really have to wonder how long the current policy will remain in force. Looking more closely at the updated 2024 policy, it contains two pertinent phrases. These are "Don't use our service to harm yourself or others" and "Don't use our services to develop or use weapons, injure others or destroy property". According to an OpenAI spokesperson, these changes were made to provide clarity, and to allow for national security use cases that align with their mission, such as working with DARPA -- the U.S. Department of War's advanced research agency -- on cybersecurity tools.

Now, I could possibly be wrong about this, but my gut feeling tells me that the U.S. military may simply be playing these AI companies for suckers. Why do I say this? Well, as I have mentioned before, I live in a US territory which has a strong American military presence. In fact, I have lived here for a total of forty years. As such, I notice their attitude, and I see how they treat the local population. In short, it is my experience and belief that the U.S. military will simply do as it wants, regardless of what anyone else may think. I think the same thing applies to these AI companies. While they may each have their usage policies which dictate how their AI technology can be used, if push comes to shove, I honestly do not believe that the U.S. military will honor them in the end.

To be clear, the U.S. military may honor the policies during the course of the current one-year contracts, but I wonder if they will continue to do so once they have acquired both the knowledge and the experience that they require from OpenAI and the other AI companies. On a side note, please understand that when I refer to "the U.S. military" and "the Department of War" in the previous paragraphs, I am of course referring to DARPA, which we have already amply discussed in part one.

While OpenAI's updated usage policies were quietly agreed to in 2024 without making much public fanfare, what I learned in the course of conducting my online research is that prior to the June 2025 contract with DARPA, OpenAI had in fact ALREADY partnered with the military contractor/defense tech startup, Anduril Industries, in late 2024 for the primary purpose of developing counter-drone systems. While Anduril Industries' projects fall under national security, they are not directly involved in lethal autonomous weapons systems themselves, according to online sources.

Furthermore, if you are curious regarding exactly how deeply in bed OpenAI is with the U.S. military, consider the fact that OpenAI's chief product officer -- Kevin Weil -- as well as a number of other OpenAI tech executives, also joined a new U.S. Army innovation corps as lieutenant colonels just a few months ago. That says a lot to me. What do YOU think? The simple truth of the matter is that these AI companies, and other technology-related companies, simply don't know how to resist when the U.S. military entices them and throws piles of money at them. These companies -- and I would add that a lot of universities fall into the very same group -- are a lot like moths being drawn to the light. In other words, the projects in which they each engage require a lot of financial backing and other resources, and the U.S. military is happy to pitch in when there is something that it wants and needs from such entities. So funding is the Achilles' heel of the AI companies.

Regarding Elon Musk, let me return to something I began to share with you earlier. As I was saying, for some time now, something Musk said has been floating around on the Internet and has resulted in quite a bit of controversy. This is in fact directly related to Artificial Intelligence technology. It was during an October 2014 interview at the MIT AeroAstro Centennial Symposium, that Musk remarked that developing AI technology is like "summoning the demon". Mr. Musk used the metaphor to emphasize his serious concerns regarding the potential dangers and unpredictable consequences which could possibly result from the development of advanced Artificial Intelligence. His full comment was as follows:

----- Begin Quote -----

"With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Doesn't work out".

----- End Quote -----

Sadly, as I point out in my companion article entitled "AI, Chatbots, Daemons and Demons", a certain number of my online Christian friends took Musk's comment quite literally. Thus, they began to foolishly promote and circulate the ridiculous rumor that using AI chatbots is actually talking to demons. Please refer to that article for more details.

However, the fact of the matter is that Elon Musk was most certainly NOT making the claim that Artificial Intelligence is a demon. Anyone who reads his full comment in context can easily see that. All Musk was really saying is that just as somebody who unwisely summons a demon deceives himself into believing that he is in complete control of the situation -- even though he really is not -- in a very similar fashion, humanity may likewise release a technological force which it may not be able to control for very long. It is just like the story of a genie in a lamp. Once it is let out, does the lamp holder control the genie, or does the genie control the lamp holder? This concept was very well depicted in the extremely popular 1992 Disney movie "Aladdin" where Jafar became a very powerful, wicked genie who almost destroyed everything, until the hero stopped him.

A few months earlier in August of 2014, after reading Nick Bostrom's book entitled "Superintelligence," Elon Musk also commented that Artificial Intelligence is potentially more dangerous than even nuclear weapons. Mr. Bostrom's 2014 book explores the question of what happens when AI-based machines become more intelligent than humans. It likewise questions whether Artificial Intelligence will actually save humanity, or perhaps destroy us all.

Some four years later in March of 2018, while speaking at the South by Southwest tech conference in Austin, Texas, Mr. Musk again repeated his assertion that very advanced AI is more dangerous than nuclear weapons. Musk also described advanced Artificial Intelligence as humanity's "biggest existential threat". Here are some select quotes from his speech which emphasize his position:

----- Begin Quotes -----

"I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane."

"And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane."

"I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential."

"So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital superintelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one."

----- End Quotes -----

Due to his personal concerns, Elon Musk began to repeatedly call for proactive regulatory oversight to ensure that the development of advanced AI technology remains safe for all of humanity. Following is another quote extracted from his 2018 South by Southwest tech conference speech:

----- Begin Quote -----

"I am not normally an advocate of regulation and oversight -- I think one should generally err on the side of minimizing those things -- but this is a case where you have a very serious danger to the public . . . It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important."

----- End Quote -----

The previous year, while attending the July 2017 National Governors Association Summer Meeting, Musk also pushed for regulatory oversight with regard to the development of very advanced Artificial Intelligence. At that particular meeting, Musk stated the following:

----- Begin Quote -----

"AI is a rare case where I think we need to be proactive in regulation than be reactive."

----- End Quote -----

Regarding Artificial Intelligence experts and certain other technology gurus -- including Meta's Mark Zuckerberg -- who have criticized his strong position in which he warns of the potential dangers of very advanced Artificial Intelligence, and fights for serious oversight of the same, Musk had this to say at the same 2018 SXSW conference:

----- Begin Quote -----

"The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don't like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed."

----- End Quote -----

Please notice that in all of my previous comments, I've made it a point to make clear that the actual danger lies in the creation and the further development of advanced Artificial Intelligence. That's to say, Artificial General Intelligence, and even going BEYOND human-level AGI to what is now being referred to as Superintelligence. It is the latter which is of most concern. While Artificial General Intelligence will be on par with our human intelligence, and then some, on the other hand, Superintelligence -- should it ever be created -- will surpass human intelligence across virtually ALL DOMAINS, such as problem-solving, scientific creativity, and social skills.

Stated another way, a Super-intelligent computer would be vastly smarter than humanity's brightest minds, and would be able to perform intellectual tasks at a level far beyond our own. Unlike the Narrow AI and Case-Specific AI we discussed earlier, Superintelligence would include the ability to learn and adapt to new domains more quickly than humans. One of the key features which it would exhibit would be the ability to not only solve problems, but to REWRITE ITS OWN CODE, REDESIGN ITS OWN GOALS, and OUT-THINK EVERY HUMAN MIND COMBINED. Now folks, that is SCARY!

So, to reiterate, the concern -- which Musk himself points out -- is NOT with Narrow AI or Case-Specific AI. Examples of these two are AI chatbots and self-driving cars. These types of AI are provided with a very narrowly-defined instruction set so that they can only perform certain tasks. These are NOT species-level risks. The risks arise when we advance to Artificial General Intelligence, and then even further than that to Superintelligence which will possess the ability to think, analyze, decide, execute actions, alter its own code, and outperform humans in every way imaginable. That is what Elon Musk is most concerned about, as he points out in these comments which are also extracted from his 2018 SXSW speech:

----- Begin Quotes -----

"I am not really all that worried about the short term stuff. Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs,and better weaponry and that kind of thing, but it is not a fundamental species level risk, whereas digital superintelligence is."

"So it is really all about laying the groundwork to make sure that if humanity collectively decides that creating digital superintelligence is the right move, then we should do so very very carefully -- very very carefully. This is the most important thing that we could possibly do."

----- End Quotes -----

If you carefully read all of the previous quotes, you will see that Musk actually believes in a two-pronged approach to make sure that, should Artificial General Intelligence be achieved in the coming years, we do not lose control of it, it does not get out of hand, and we do not in fact become dominated by it. It is evident that Musk strongly believes in regulatory control. However, he is also advocating symbiosis. In other words, he believes that rather than let Artificial General Intelligence and Superintelligence jump so far ahead of us, we as humans need to develop right along with them, and in fact physically integrate with them.

In case you still don't understand, we're talking about human enhancement through outright integration and symbiosis with machines! This is what Elon Musk and similar transhumanists such as Ray Kurzweil and Kevin Warwick -- who I discuss in other articles -- are really advocating. No, I personally do NOT agree with this second step. Neither would I ever submit myself to such an ungodly procedure. Would YOU?

However, that is what Elon Musk and his cohorts are convinced we as a species must do if we are to really stay ahead of the Artificial Intelligence curve, and not eventually become its slave or even be wiped out by it. In fact, as some of you may know, it's in large part for that reason that in June of 2016 Musk and his associates established their Neuralink company where they are now seriously working on BCI chips -- or Brain Computer Interface chips -- which will allow us feeble humans to electronically enhance the natural abilities we are each born with. As Musk has stated:

----- Begin Quote -----

"We do want a close coupling between collective human intelligence and digital intelligence, and Neuralink is trying to help in that regard by trying to create a high bandwidth interface between AI and the human brain."

----- End Quote -----

In short, insofar as striving to remain ahead of AGI and its "son" Superintelligence is concerned, Mr. Musk arrived at the conclusion, and adopted the attitude, that "If you can't beat 'em, join 'em". In other words, again, in his view, symbiosis via BCI chips and other related technologies is the key to humanity's survival. In fact, Musk has been pushing this line of thought for years. For example, in a May 2020 interview on the Joe Rogan Experience podcast, Musk said "So how do you go along for the ride? If you can't beat 'em, join 'em." On Twitter -- now known as X -- in July of 2020, Musk likewise tweeted "If you can't beat em, join em. Neuralink mission statement". In 2019 at the World AI Conference in Shanghai, during a discussion with Jack Ma, who was at that time the chairman of Alibaba, Elon Musk again proposed that the solution was: "If you can't beat 'em, join 'em."

Lastly, it was in a 2017 interview with "Wait But Why", and later again in a 2023 interview with CNBC regarding his AI company xAI, that Musk plainly confessed that after trying so hard to "sound the alarm", and wake people up to the real potential dangers of Artificial General Intelligence, and yet having little impact, he realized that the only feasible strategy for preventing the nightmarish future he envisioned, was to actively develop Artificial Intelligence and a reliable brain-computer interface, through his two companies, xAI and Neuralink. In his view, this would be a "good way" to ensure a positive outcome for humanity. So that is where Musk stands today with regard to Artificial Intelligence, and how he has become convinced we can stay ahead of it.

Please go to part three for the continuation of this series.


