

ChatGPT: What Is It, and How Does It Work? (Featuring an Excerpt Written by ChatGPT)

If you're following the writing landscape much these days, you've probably heard about ChatGPT... and you've probably also heard people freaking out about it. I wanted to understand more about what ChatGPT actually is and how it works, and whether it's really bringing about the end of writing and publishing as we know it, like so many are worried it is (or possibly the impending doom of civilization, depending on whom you're listening to).


So I decided to play around with it a bit.


And the results were... very interesting.


But before I get into my main conclusions/takeaways from this experience, I want to bring you into the rabbit hole with me. (Because where's the fun, otherwise, right?)


So, I'm going to let ChatGPT tell you itself what it is, what it does, and how it works...and then I'll give you my thoughts on what I've learned from experimenting with this AI model over the past few weeks.

(The following indented excerpt was written entirely by ChatGPT, based on my prompt: Write a 500-700 word blog post on the topic of "ChatGPT: What it is, What it isn't, and How it Works.")


ChatGPT is a large language model created by OpenAI, based on the GPT-3 architecture. It is designed to interact with users in a conversational manner, responding to natural language input in a way that simulates a human conversation.
At its core, ChatGPT is a language model that has been trained on a massive dataset of text from the internet. This dataset includes everything from news articles and books to social media posts and online forums. By analyzing this vast collection of text, ChatGPT has learned to recognize patterns in language and generate responses that are contextually relevant and grammatically correct.
ChatGPT is not a human being, and it's important to keep that in mind when interacting with it. While it is capable of generating responses that may seem human-like, it is still a machine and does not have emotions, feelings, or beliefs. It is also not capable of understanding the world in the same way that humans do, so it may not always be able to provide accurate or helpful responses to certain types of questions.
One of the most remarkable things about ChatGPT is its ability to learn and adapt over time. As it interacts with more and more users, it is constantly learning from those interactions and incorporating that knowledge into its responses. This means that ChatGPT is always getting smarter and more capable of providing useful and informative answers to a wide range of questions.
So, how does ChatGPT work? When a user interacts with ChatGPT, they type in a question or statement, and ChatGPT generates a response based on the context of the input. This response is then displayed to the user, who can continue the conversation by asking follow-up questions or making additional statements.
Behind the scenes, ChatGPT uses a process called deep learning to generate its responses. This involves analyzing the input text, breaking it down into individual words and phrases, and using statistical models to determine the most likely response based on the context of the input. This process is repeated over and over again as the user interacts with ChatGPT, allowing it to learn and improve over time.
One of the benefits of ChatGPT is its ability to provide instant answers to a wide range of questions. It can help users find information on just about any topic, from science and technology to history and culture. It can also provide advice on personal issues and offer suggestions for things like books, movies, and music.
However, there are some limitations to what ChatGPT can do. It is not a substitute for professional advice or medical diagnosis, and users should always consult a qualified expert for issues that require specialized knowledge or expertise. ChatGPT is also limited by its reliance on the data it has been trained on, which means that it may not always be able to provide the most up-to-date information or the most accurate answers.
In conclusion, ChatGPT is a remarkable tool that can provide instant answers to a wide range of questions. It is based on a sophisticated language model that has been trained on a massive dataset of text from the internet, and it is constantly learning and adapting to new information. While it is not a substitute for human expertise or specialized knowledge, it can be a valuable resource for users seeking information or advice on a wide range of topics.


Okay, so... wow. A lot to unpack, right? Let's just hit some of the highlights of what I've learned while researching (and playing around with) ChatGPT.



1. ChatGPT has raised a lot of questions about ethics, just like Midjourney did for the art industry... or at least, sort of.


First, let me summarize what's been happening with the art AIs, to put this in context.


One of the main controversies surrounding the use of AI art-generating programs (like Midjourney) is that the AIs "learn" from existing art (often without the permission or knowledge of the original art's creator) and then use that data to generate the AI's assimilated variation (i.e., the AI's own, new piece of artwork) in response to user-provided prompts.


Basically, the AIs are creating derivative art based on the art by which they've been "trained"... or at least, that seems to be what many are concerned that they're doing.


Derivative work has long been a grey area in the rights/copyright arenas, with major concerns as to intellectual property rights (the massive fanfiction industry comes to mind as one example where these grey areas are at play). But if material is copyrighted or trademarked, the original copyright holder technically must give permission for a derivative work or adaptation to be created. In order to be considered "fair use," a derivative work must change the original enough to be "transformative" and thereby qualify as its own, separate piece of work... but those boundaries can be a bit fuzzy, as explained in this article from a copyright law firm.


Then there's also the issue of whom the copyright belongs to for that AI artwork, if it does qualify as its own, original piece. Many people believe that they automatically own the rights to art they create via an AI, but some have argued that the rights should belong to the AI platform/the company that owns the AI, while others argue that AI-generated art is not human-created and therefore not copyrightable at all (which would mean anyone could use any given piece of AI-generated art, without needing permission from the person who generated it or the AI platform it was generated through).


While some uses of these art AIs might be harmless--and the output may not directly resemble any recognizable source material--ethical usage becomes far more of a concern when users input prompts that direct the AI to replicate a specific piece of artwork or to imitate a particular artist's style or work. Because then it often does closely resemble the source material... and some artists are quite upset about their art being used in such a way without their permission.


Since the legalities of AI art--and where those boundaries fall--are still being debated, this is also now a concern for authors, business owners, and other people who might purchase art for covers, logos, etc... because unless the creator/seller discloses AI usage, we could be purchasing AI-generated art without even realizing it.


And this is just the rights discussion. I haven't even focused yet on what it means for professional artists that AIs can now quickly and reliably generate passable artwork for anyone with access to the internet. Though AI-generated art may contain some AI "tells" that some discerning eyes can spot, overall the quality of this art is high, and it can be produced in seconds... which has many hardworking artists concerned that it may put their own livelihoods at risk.


On the other hand, the ability to efficiently produce high-quality artwork (at free or very low cost) is a massively useful resource for many businesses and individuals, providing access to artwork that financial and time constraints would have otherwise prohibited.


If you set aside the panic about human artists being replaced by bots, the controversy is mostly just a question of how (and whether) AI art can be done ethically.


So, yeah... the recent explosion in the use of art-generating AI programs has basically sent an entire industry scrambling to find a new (legal and ethical) balance.


ChatGPT and other AI writing programs have created a similar scramble in the writing and publishing industry... but it may not be exactly the same sort of scramble.


While ChatGPT is also "training" itself by consuming vast amounts of data from the aether (i.e., the internet) and other text-based sources it's been fed, it is not simply smashing those elements together as derivative work in response to a prompt, like I've heard the art AIs described as doing.


(Let me note again here that it's possible that's not what the art AIs are doing either--they may actually be generating original content based on what they've learned from consuming artwork... but many of the strong concerns I've heard voiced over art AI seem to center on this point of contention.)


In any case, ChatGPT is actually learning from the inputs it has gleaned from text-based sources on the internet--everything from literature and website content to social media posts (and also the inputs of users in real-time as they're using it, which I'll come back to in one of my other points below). ChatGPT then uses what it's learned to predict the best, most rational and natural-sounding response to user prompts and questions, based on its understanding of language and the information it has assimilated as "true" from its database.


To put this simply: as far as I understand, ChatGPT is essentially like a very, very advanced "Autofill" predictive text program, combined with all the massive knowledge and data points on various subjects it's gleaned from the internet.


(Note the emphatic use of "very, very advanced," because it's definitely more complicated than "Autofill" makes it sound--it can even follow complex instructions to generate very specific results... some of which I'll detail below.)


But from what I can tell, ChatGPT is predicting natural-sounding language output based on what it's learned, and can follow complex instructions rather than simply creating derivative or imitative versions of its source material based on general keywords or prompts... so it may not be working quite the same way as the art AI. (It's also quite a bit more unsettling, in my opinion, than how the art AI works--albeit fascinating--but I'll come back to that point in a minute.)
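For anyone curious what "predicting the next word" actually looks like under the hood, here's a rough sketch using the small, freely available GPT-2 model as a stand-in (ChatGPT itself isn't something you can download, so this is only an illustration of the same general idea, and it assumes you have the Hugging Face transformers library and PyTorch installed):

```python
# A rough sketch of "pick the likeliest next word," using GPT-2 as a stand-in
# for ChatGPT. Assumes the Hugging Face "transformers" library and PyTorch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to learn to write fiction is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary, at every position

# Look only at the scores for whatever word would come next, and turn them into probabilities.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, 5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.3f}")

# A chatbot built on this idea repeats the "pick a likely next word" step over
# and over (with some randomness mixed in), which is how whole paragraphs of
# natural-sounding text get generated.
```

(Run against a real prompt, the top few candidates tend to be perfectly sensible continuations... which is the whole trick: plausible-sounding, not necessarily true.)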


Obviously, ChatGPT does still raise some ethical concerns (especially if it's being used to generate content that's then sold to a consumer without disclosure of AI use). And there is still the potential ethical concern of ChatGPT sampling from existing content in its learning process, possibly without permission.


So what does this mean, then, for people who use or sell ChatGPT-generated content in their websites, books, blogs, etc.? Well... that's still unclear... and depends a lot on the context and usage, in my opinion. (I'll get into this more in points 3-5 below).


Also, it's important to note here that since ChatGPT has a vast informational database, it can tell you fairly reliable information on major, well-documented subjects (particularly anything prior to 2021, which is roughly where its training data cuts off). But ask it about something more obscure--like, for example, who Crystal Crawford is--and it... just makes stuff up. Like, literally. I tested it, and so did an editor I know. It created a whole, legit-sounding biography paragraph for him and... none of it was even true. So, that brings me to point #2...


2. You can't blindly trust ChatGPT


I don't mean this in the creepy "don't do what the AI overlord tells you" way, though I suppose that applies, too. What I mean is...


ChatGPT is not a research database. It's an AI whose sole purpose is to sound (actually, to read) like a human response. So, just process that statement for a moment. ChatGPT's responses are meant to seem human and natural-sounding. Not to necessarily provide accurate or truthful information.


I suppose the same could be said about some humans on the internet, actually, but that's a whole other topic for another day.


Despite its sketchiness as an info source, I found that ChatGPT was useful as a brainstorming device, and even as a starting point for information (for example, the whole excerpt above about what ChatGPT is). Because it does have large amounts of data accessible (though it's not live-accessing the internet, so its database only contains whatever it accessed up through about 2021), it can provide information on well-documented topics. But as I mentioned above, it does not perform well on more obscure topics... and even when it's telling you the answer to something (and delivering that answer quite confidently, I might add), it could still be 100% wrong. I've even seen videos where ChatGPT "changed its mind" on something when the person questioned it... because, again, it's responding to YOUR prompts and adapting to give you the information YOU ask for, based on a combination of the data it has available and its own predictive algorithms.


So, basically... fact-check anything it tells you, because you can't just blindly trust it, even if it's blatantly telling you something like it's fact. But that pretty much goes for the entire internet these days, right?


I also need to state here that ChatGPT isn't human, and should not be treated like a companion or friend... because... as it told you itself in the excerpt above, it's not human and has no feelings. I would hope not to have to specify this, but recent news headlines prove otherwise. So... If you are in need of an actual friend to converse with, please don't turn to ChatGPT. If you really do need someone, reach out. (You can find my email address in the footer on this website.) Chat-bots will never be a substitute for actual human connection, no matter how realistic the conversation sounds. (And, as I stated above, you can't necessarily trust what ChatGPT tells you, anyway.)


But that doesn't mean ChatGPT isn't useful.



3. ChatGPT is useful for some things... like, really useful


I went down the research rabbit-hole on this (with help from my husband, who has become fascinated with this whole topic, since online marketing and social media are his area of business)... and I found that there are some really helpful things ChatGPT can reliably be used for... and without massive ethical concerns. Here are some examples:

  • Organizing data into tables for easier use (yes, you can copy/paste data into ChatGPT, and as long as you've tagged the data clearly and you give ChatGPT clear instructions, it will MAKE a table for you. It's... actually really cool.)

  • Brainstorming (I've asked it to generate possible hashtags related to certain post captions, when I was having trouble thinking of some on my own; I've also asked it for basic starting-point ideas for various projects, basically dialoguing back and forth with it like it's a brainstorming partner. This, in particular, was fascinating: some of the ideas it gave me were terrible, but then I started asking clarifying questions, it applied the context from my previous questions to my new ones, and as it understood the context better, its responses got more and more helpful. I might do a longer post on this later, but it was very interesting, to say the least.)

  • Quickly drafting and organizing outlines on non-fiction topics for which I feed it very specific informational prompts (and then subsequently fact-check its output, because, as I said, it can't be trusted)

  • Summarizing a longer video description/longer post (which I wrote and gave it) into a 1-2 line caption (there's a quick sketch of what this looks like via the API just after this list, for the curious)

  • Rewording phrases or playing around with wording of phrases for blurbs, titles, etc. (Ironically, while I was asking it to help me write catchy phrases reworded from what I already had, it told me it WASN'T allowed to write "clickbait" headlines but could help me write something catchy and attention-grabbing, which I found amusing.)

  • Getting unwanted psychotherapy from an unqualified chatbot... apparently. (I stumbled upon this one by accident when I asked it to help me brainstorm potential plot events, centered on a theme I specified, for a character whose backstory & struggles were based on one of my own struggles as a teenager. The prototype synopsis it produced was actually decent storytelling, in terms of a character arc, and also felt a little like a low-key therapy session since ChatGPT detailed how that character would overcome her struggle, gave her a very moving triumph moment, and decided to throw in key words like "empowerment" of its own accord. I don't plan to use this exact plot suggestion, but... it was definitely interesting.) Restated caveat to this point: please don't actually use ChatGPT as a therapist. It is not human, has no feelings, does not care about you, and sometimes just blatantly lies. These are all things that you wouldn't be able to say about an actual therapist...hopefully.

  • Generating VERY BAD poetry and plot outlines, just for my own amusement. (To be fair, the poem wasn't THAT terrible, but it also wasn't that good and didn't exactly follow the parameters I gave, which was probably my fault for how I phrased the prompt. The plot outline, on the other hand, was basically a patchwork of genre tropes mashed together with massive plot holes, and felt like a very messy, mediocre movie script. Which leads me to my next point...)
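(And one quick aside before that next point: for anyone who likes to tinker, here's roughly what the "summarize a longer post into a 1-2 line caption" item above looks like if you go through OpenAI's API instead of the chat window. This is just a minimal sketch on my part--the model name, the system message, and the prompt wording are placeholders, not anything official--and it assumes you have the OpenAI Python SDK installed and an API key set up.)

```python
# A minimal sketch of "summarize text I wrote myself into a short caption,"
# assuming the OpenAI Python SDK (v1-style client) and an API key in the
# OPENAI_API_KEY environment variable. Model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

long_post = """(paste your own longer video description or blog post here)"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": "You condense text the user wrote themselves."},
        {"role": "user", "content": "Summarize this into a 1-2 line caption:\n\n" + long_post},
    ],
)

print(response.choices[0].message.content)
```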

4. I don't think ChatGPT is close to ending "the world of writing and publishing as we know it"... yet.


From my experiments, I found that ChatGPT does some things really well... sometimes passably well, even while writing poetry and fiction... yet on the whole, there are still major issues that distinguish its content from content produced by a skilled, professional human author. It's lacking in nuance (though it was great with heavy-handed application of tropes), and in the ability to produce a plot that doesn't feel contrived or have major holes, so experienced fiction authors are probably safe for now from having ChatGPT as a legitimate competitor. And while it produced a passable poem, it wasn't really a great poem, so skilled poets are most likely safe for now, as well.


However, the distinction between ChatGPT and a... well, not-so-great author... is harder to peg. New authors often have a lot of the same blunders or "heavy-handed" feels to their writing (and I'm not judging or meaning to discourage! I have been there and am still there, in certain genres or new applications for my writing). This fact has made it very difficult for publications to determine confidently whether a piece submitted to them was AI-generated or simply written by an author who's still learning his/her craft (or possibly an author writing in a language other than their native tongue). Which brings me to my final point...


5. It does still impact authors. Especially indies. (And not really in a good way.)


Because of that final point from #4 above, some conscientious editors (like the one at Clarkesworld, recently) have found themselves inundated with low-quality, probably-AI-generated submissions, but are struggling to sift through these efficiently without accidentally banning a human author over a legit submission mistaken for AI.


And yes, they do ban anyone who submits a verifiably AI-generated piece, and many publications are now putting more and more systems in place to identify these reliably... so, please, don't attempt to build your author career by quickly AI-generating a bunch of stories to submit to fiction markets. They won't be up to par with skilled, human-written stories, but the editors will have to wade through them to get to the ones that are...and you won't get paid or gain any benefit from submitting a story that's ultimately rejected (or that gets you permanently banned, if they can verify it's AI), so you're only wasting everyone's time.


(And frustrating loads of people. And making it harder for legitimate authors to get their stories seen. Seriously, just... please don't.)


On platforms like Amazon/KDP Publishing, where there is no human gatekeeper between the individual author and the self-publishing process, it is possible to make a quick buck by generating loads of sub-par AI content, publishing it, and using your sweet marketing mojo to convince people to buy it. Since readers have to buy the book up front, you may even make some quick money on your initial sales before people realize it's poorly written and start to leave you rage-filled reviews. It is possible.


BUT... just... please, don't.


Even if you don't have the scruples to care that you're manipulating readers into buying something that's poor quality or isn't what it seems, scammers who do things like that impact how all the rest of us are viewed as indie authors/self-publishers. We've worked so hard to establish legitimacy for self-publishing/indie-publishing over the last several years, and this latest abuse of AI-generated "books" is tearing relentlessly at everything we've built.


Already, we're seeing an increase in Amazon bot-controls, in an attempt to clamp down on this abuse of the system. But guess what? The bots aren't fool-proof, and already it's resulting in some indie authors getting their books pulled down and their accounts suspended or even permanently closed... all because the bots were triggered by some keyword or unknown data point. It's meant to crack down on scammers, but some innocent, legitimate authors are getting caught in those nets, too.


And it's almost always indie authors--like me--who are impacted by this. Not the big-name authors with the money and influence of the publication companies behind them... because, for obvious reasons, they have the leverage to make sure Amazon doesn't insta-ban them over a stupid bot error. But indies? Not so much. And sometimes, Amazon won't reinstate those indie authors' accounts, even after the authors explain the error. For real. I'm not kidding. (This is another topic, too--Amazon does a lot of trampling of the little guys, particularly indie authors, and it seems to be getting worse. But... that's for another discussion.)


There is no short-cut to becoming a successful author. If you produce low-quality books (ChatGPT-generated or otherwise), you will not be building a sustainable author career. You will be making a quick buck at the expense of readers' trust and to the detriment of other authors who have worked very hard, in some cases for years and years, at their craft. ChatGPT is a tool. Not a "become-an-author-with-one-click" golden ticket (despite all the rage-inducing YouTube ads I keep seeing to the contrary). But that hasn't stopped some people from trying to use it that way, of course. [insert dramatic, exhausted sigh]



So, in conclusion, here are my two cents on ChatGPT in a nutshell:

  • ChatGPT is not evil. There are ways it can be useful.

  • There are also plenty of ways it can be (and is being) abused.

  • Those abuses harm authors.

  • Legit use of ChatGPT as a productivity tool is fine.

  • Trying to pass wholly ChatGPT-generated content off as your own original work and thereby somehow "shortcut" the path of hard work and skill development it takes to become an author is... not fine.

  • One day ChatGPT might legitimately create stories that are indistinguishable from human-written ones, but today is not that day, so... nobody panic. (Yet.)

  • Please don't use ChatGPT to scam readers and further harm hard-working indie authors and seriously overworked magazine editors. Okay? Thanks.

  • But use it for what it's useful for, for sure! (I have to keep saying this, because I want to make it clear that though I see the reasons for concern, it is also a fascinatingly useful tool, and can be used ethically within limits, in my opinion.)

And also, in case you were wondering... all my books and stories are human-generated. You know, because that's apparently a thing we need to say these days. ;)



Do you find ChatGPT to be a fascinating tool, or are we well on our way to being overtaken by Skynet?


Let me know your thoughts below!



Support my writing & get exclusive content and perks through MY PATREON!
