Chatbots Probably Couldn’t Write This: Moral and Ethical Considerations of Generative AI

By Michael Mayes, Content Writer

Estimated Reading Time: 8 minutes

As a technical writer and researcher at Deepwatch, I have a few thoughts on generative artificial intelligence tools like ChatGPT. So as not to bury the lede: my personal security concern lies more in the ability of these tools to generate malicious code and malware than in their ability to write a blog post or become sentient. Content creators worried about generative AI should take a lesson from other emerging technologies, like cryptocurrency and blockchain. Regardless, perhaps tools designed to mimic human intelligence will encourage society to be better at what we create.

Ethics in Emerging Technology

In my role at Deepwatch, I create a variety of content. I’m most passionate about security technology and processes, the tools that find needles in haystacks. My cyber career began in 2013 with research on the first anonymous marketplace, the Silk Road, and the new money everyone now knows, Bitcoin. I also tracked ransomware attacks from the days of the first leak sites in 2019, and today I geek out over Deepwatch ATI reports, the latest ransomware strain or technique, and the evolution of malware.

I’ve developed marketing and technical content for emerging technologies and industries my entire career, first as part of the Human Genome Project in college. Next I was part of the mobile software revolution. I helped shape language in cloud computing from its early days, and today sit in the middle of a cybersecurity industry struggling to define itself (what exactly is XDR?). The point is, I’ve been fortunate to join more than one technological revolution. That excitement is one of the primary reasons I love my work at Deepwatch.

As someone shaping the language of new technology, I strive to create and act ethically, and to raise early awareness of the moral, ethical, and economic impact of new tech on society and the environment. That interest comes naturally to me as the grandson of a brick mason who helped build the hidden city in Oak Ridge, Tennessee, which enriched the uranium for the bomb dropped on Hiroshima, and as the son of a U.S. Navy submariner who served aboard nuclear-armed submarines.

In 2018 I wrote an article, “On Bombs, DNA and Blockchain,” examining the moral and ethical dilemmas present in distributed ledger technology. My concern then wasn’t that cryptocurrency would replace the Federal Reserve, or that immutable databases would replace the legal or real estate industries; it was that there wasn’t enough real dialogue about their potential applications and impact.

Ask Questions Early and Often

Today I hear many of the same concerns when people discuss generative artificial intelligence platforms and large language models like ChatGPT and Bard. As with cryptocurrency and blockchain technology, many people worry these tools will eliminate jobs, upend the natural order of creation, or even hasten the demise of all humanity. Think tanks like the Carnegie Council for Ethics in International Affairs have already begun to generate discussion with their Artificial Intelligence & Equality Initiative. My hope is that organizations like theirs will foster early public dialogue about the moral and ethical dilemmas in AI.

The fact is, we haven’t always had those moral and ethical conversations. Scientists on the Manhattan Project began feeling deep remorse long before the bombs were dropped; the questionable morality of dropping such a weapon on civilians, and its lasting, destructive power, profoundly disturbed many of its creators. Less than two months before the first bomb was deployed, Manhattan Project scientists at the Metallurgical Laboratory in Chicago submitted the Franck Report, asking that these new weapons not be used and urging consideration of the broader issues for the country and the world.

Unlike the Manhattan Project, the health scientists of the Human Genome Project created the Ethical, Legal, and Social Implications (ELSI) program in 1990, at the project’s outset, as an integral part of the mission. What would the ability to alter genes mean for society? Would someone try to make an army of super soldiers, or would we commercialize the customization of fetuses? Thankfully, these ethical questions were discussed from the beginning.

Cryptocurrency and Blockchain

Cryptocurrency and blockchain technology caused similar hand-wringing. Satoshi Nakamoto released the Bitcoin white paper in 2008, and one of its first use cases and proofs of concept was a dark market that sold drugs and other illegal items. Today, according to CoinMarketCap, there are over 20,000 cryptocurrency projects with a combined market cap of over $1.15 trillion. While today’s ransomware ecosystem would not be the same without crypto, the truth is that cryptocurrency wasn’t made for cybercrime; it just makes it much easier.

Beyond the market ethics of cryptocurrency, other dilemmas exist. For example, what would a world currency look like, and how would it impact fiat currencies or destabilize monetary policy? How much energy will mining require in the future? How do cryptocurrencies widen the distance between the haves and have-nots, or simply between the tech-literate and those without Wi-Fi? Who are the visionaries discussing the moral implications of immutable, global, decentralized databases? What human costs might be incurred in their proliferation? What destructive code lurks in the best DAO?

Critics of Bitcoin suggest cryptocurrencies undermine the fiat currency structures of countries around the world and disrupt financial markets that survive through regulation. Many see the hype surrounding cryptocurrencies as fueling scams and Ponzi schemes designed to dupe unsophisticated investors, and argue that digital currencies have no intrinsic value. Since most computer users are unaware of the complexity behind the applications they use, many people won’t fully understand encryption, distributed ledgers, or token economics, making them targets of the technological elite.

AI is the New Blockchain

I suspect generative AI tools will no more replace content creators than blockchain has replaced all database applications. One big concern in 2015 was that decentralized, blockchain-based systems would eliminate entire industries. Some suggested we would no longer need attorneys, that contracts of the future would be immutable, instantly executable, and flawless. Many law firms I spoke to then, such as Jones Day, now have divisions that specialize in smart contracts. Blockchain didn’t eliminate the need for attorneys; it created new opportunities.

Blockchain has seen real-world application in logistics and food safety for companies like Walmart. The fever pitch around blockchain tech three years ago, however, has waned. One of the costs of decentralization is inefficiency: it can be slow, and it requires a great deal of energy. We centralize because it is faster and more efficient. Blockchain can be robust and more egalitarian, and it is making us look at ownership and authentication differently. But its application is limited, and blockchain systems will not eliminate human interaction with databases.

I foresee a similar outcome for generative AI and large language models. Writers and content creators who want to communicate effectively with human, critical thinkers should craft messages based on empathetically knowing an industry, personally engaging with new technology, and intuitively analyzing disruptive changes to society. As I’ve heard it said in cybersecurity concerning automation: as long as there are human threats, we will need human defenders. As long as there are human communicators, we will need humans to make real connections in a world where trust is in short supply.

Generative AI’s tendency to hallucinate, or make things up entirely, and its absence of a moral compass will become bigger problems even as the tools improve. ChatGPT’s strength and weakness lie in the data it pulls from billions of websites: garbage in, garbage out. Better AI will emerge that is more selective in its training, but all models will retain the inherent biases of their training data.

Still, over 100 million users signed up for ChatGPT just two months after its release, making it the fastest-growing consumer application in history. As a former educator, I think that is partly because good writing is hard for many people. It takes time to create something compelling, and few other disciplines inspire as much anxiety as writing. The true magic, as I’ve always told students, is in the editing process. That process will still take time and skill, even when tuning AI tools. In the near future, users of these large language models will have simple ways to train them for their individual needs.

Deepwatch and AI

Let’s be clear: ChatGPT is a chatbot that uses artificial intelligence to answer questions and perform tasks in a way that mimics human output. Users can create documents, write basic computer code, and mimic other types of communication. The service is designed to actively block requests to generate potentially illegal or harmful content. Already, hackers are finding ways to circumvent those controls, and even to offer circumvention as a service, a way to outsource getting around ChatGPT’s restrictions. Knowing the history of malware and exploit code sales on the dark web, I expect to see those services grow, along with new strains and variants of ransomware.

One Deepwatch ATI analyst shared these thoughts: “Both nation-state threat actors and cybercriminals will use generative AI for native English language opportunities, like crafting a very convincing phishing email. We also expect adversaries will use the technology to make small but impactful changes to avoid detection. On the flip side, network defenders will start employing the technology to triage security events and prioritize security alerts.”

When it comes to generative artificial intelligence tools, it does seem to me that we’ve skipped the discussion about how they may be used to do harm. Perhaps if we’d had more discussion about the potential impact of Facebook on society, or of Instagram on mental health, we would have done things differently. Regardless, we must now begin the hard work of considering the larger impact and consequences of generative AI on society, engaging in better dialogue early, the way the Human Genome Project approached moral dilemmas, not after the fact, like the Manhattan Project.

See our upcoming blog on AI, ML and MDR for more on this topic. Register for Deepwatch blogs and cyber intelligence reports.

Michael Mayes, Content Writer

Michael Mayes is a content creator at Deepwatch and a certified OSINT analyst. He has over 20 years of experience in marketing communications and media relations for disruptive technologies in highly regulated industries. He has published on topics including cloud and mobile security, cryptocurrency, ransomware, and dark web markets.
