Plot twist: they had ChatGPT write the code.


And someone else used ChatGPT to expose them


It was CS101 Recursion Lesson!


It was CS101 Recursion Lesson!


It was CS102 Recursion Lesson!


ChatGPT: *An error occurred. If this issue persists please contact us through our help center at help.openai.com.*


ChatGPT: *An error occurred. If this issue persists please contact us through our help center at help.openai.com.*








Bruh, computers these days are so powerful, they have nested stacks!


It was CS103 Recursion Lesson!




So they used ChatGPT to detect ChatGPT?


"I used the stones to destroy the stones" edit: damn it I just realised I wasn't first to type this


Now that’s an adversarial network


It's ChatGPTs all the way down...


"I used the stones to destroy the stones" feels appropriate




No, you're Thanos


"Did you write this essay, ChatGPT?" "Yeah." We gottem




It would be funny if, behind the scenes, the student's app just asks ChatGPT "is this something you would write?" and then wraps a boolean check function around that call.
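The joke above is easy to sketch. This is a toy illustration, not anyone's real implementation: `ask_chatgpt` is a hypothetical stand-in for an actual chat-completion API call, stubbed here with a canned reply.

```python
# Hypothetical sketch: reduce "ask ChatGPT whether it wrote this"
# to a boolean check. `ask_chatgpt` stands in for a real API call.
def ask_chatgpt(prompt: str) -> str:
    return "Yes, that sounds like something I would write."  # stubbed reply

def is_ai_written(essay: str) -> bool:
    reply = ask_chatgpt(f'Is this something you would write? "{essay}"')
    # Wrap a boolean check around the model's free-form answer.
    return reply.strip().lower().startswith("yes")

print(is_ai_written("In conclusion, sunflowers are fascinating."))  # True with this stub
```

With a real model the free-form reply would of course be far less reliable than this wrapper pretends.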




Wait, does that make this a GAN?


If the path to AGI is a GAN inside a chatbot I'm gonna shit


...and if it's not? Are you going to stay constipated for the rest of your life?


As long as I can


You blew my mind.


what's gan again?


https://en.wikipedia.org/wiki/Generative_adversarial_network

> The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
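The alternating-update structure described in that quote can be sketched without any neural networks. This is a structure-only toy with invented update rules: the generator never sees the real data, only the discriminator's decision boundary, which is the "indirect" training idea.

```python
import random

random.seed(0)

def real_sample():
    return random.gauss(5.0, 1.0)  # "real" data: mean 5

gen_mu = 0.0       # generator starts far from the real distribution
threshold = 2.5    # discriminator: classifies a sample as "real" if it exceeds this

for _ in range(2000):
    real = real_sample()
    fake = random.gauss(gen_mu, 1.0)
    # Discriminator update: move the boundary toward the midpoint
    # between the real and fake samples it just saw.
    threshold += 0.05 * ((real + fake) / 2 - threshold)
    # Generator update: chase the boundary so its samples get
    # classified as real (it never observes the real data directly).
    gen_mu += 0.05 * (threshold - gen_mu)

print(round(gen_mu, 1), round(threshold, 1))  # both drift toward the real mean (~5)
```

Real GANs replace both stand-ins with neural networks and both update rules with gradient steps, but the adversarial feedback loop is the same shape.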


Gonna go ask it to do it haha. Let's see how it goes.

Edit: Here it is. I don't code so I don't know if it works or not. Someone else can try:

```java
import java.io.*;
import java.util.*;

public class AIEssayDetector {
    public static void main(String[] args) {
        // Read the input essay
        String essay = readEssayFromFile("essay.txt");

        // Calculate the readability score of the essay
        double readabilityScore = calculateReadabilityScore(essay);

        // Check if the essay was written by AI
        if (readabilityScore > 5.0) {
            System.out.println("This essay was likely written by AI.");
        } else {
            System.out.println("This essay was likely written by a human.");
        }
    }

    // Method to read an essay from a file
    public static String readEssayFromFile(String fileName) {
        String essay = "";
        try {
            BufferedReader br = new BufferedReader(new FileReader(fileName));
            String line;
            while ((line = br.readLine()) != null) {
                essay += line;
            }
            br.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return essay;
    }

    // Method to calculate the readability score of an essay
    public static double calculateReadabilityScore(String essay) {
        // Add code to calculate the readability score here
        // You can use any readability formula, such as Flesch-Kincaid or Gunning Fog Index
        return 0.0;
    }
}
```


// Add code to calculate the readability score here F




It just works!


Jobs done!


I'm still amazed at how good this is. Really excited for GPT-4. If it's as good as insiders claim, we're in for a reality check an order of magnitude bigger.


I would guess that it will actually not be such a considerable difference. The main culprit behind the success of GPT is transformers; the model has obviously improved tremendously since GPT-1, but mostly because they found a way to efficiently increase the number of layers. GPT-3 already has a huge number of parameters. It was a much better model compared to GPT-2, mostly due to the size difference (~100x), but doing the same with GPT-4 is not feasible with current limitations. You just can't have a model with 10-20 trillion parameters yet. And we haven't had any breakthroughs in language processing comparable to transformers. So basically, unless they invented some new groundbreaking way to build a language model, I wouldn't expect a difference even close to what we saw with the GPT-2 to GPT-3 transition.

TL;DR - I think we are hitting the point of diminishing returns with language models, and making them significantly better will be orders of magnitude harder.




And never has


We are all still angry and bitter.


And you wrote this comment using ChatGPT


I mean ChatGPT would be the authority on whether or not it wrote a specific essay.


I used AI to destroy the AI ~ The developer probably


I used the AI to destroy the AI, to improve the AI [Generative Adversarial Networks](https://en.m.wikipedia.org/wiki/Generative_adversarial_network) :)


Was also thinking this, GAN, just add the GPTZero verdict back as training data


I tried using a GAN to generate pokemon, all I got was lumpemon


This battle will be legendary and scary


If this GPTZero is truly accurate, it has the power to make ChatGPT (and maybe other models) reach amazing potentials.


Yeah, but you could also train the GPTZero at the same time, right? So they'll keep improving each other until the singularity is reached and humanity has lost.


Yes, that's what the commenter above us said. That's called a GAN, a Generative Adversarial Network.


I’m not John Connor, there’s no downside to this future.


send a maniac to catch a maniac.


Hey look! This guy doesn't know how to use the three seashells!


Oddly enough it’s the exact same code we’ve seen for Even/Odd numbers


if essay % .real == 0 We gottem


GPTZero isn't very effective, and an existing detector written back in the days of GPT-2 still works well with ChatGPT: https://openai-openai-detector.hf.space/


I'm glad someone else knows about this. I've been using this to check through so many essays, and it works like a charm. Edit: I'm using it mostly for engineering reports and summary essays. It seems to detect technical portions better due to how frequently the chatbot repeats itself when it doesn't understand.


I tried a few different text samples from both my own writing and ChatGPT, and I would characterize it as unreliable at best. For longer samples, it seems to be more confident, but it's possible to write in a way that will misidentify human-written text as fake. It also believed that some of the ChatGPT samples were human-written, which suggests it's looking for something superficial in the text that should not be relied upon to detect generated text. I can't believe that we're going to be scrutinizing student papers based on writing style, which could lead to accusations of plagiarism where they're completely and demonstrably false.


It's just not possible. Students will touch up the generated text to correct some janky parts anyways. At the end of the day, there are people that are worse than ChatGPT at writing.


You're overestimating most students, really. Most will not correct it; they will get caught. If someone corrects the parts, at least they read it and know how to fix it, so they still did some work; I'm not worrying that much about them. The best part as a teacher is to take student work and start asking questions about the content. Works with essays, works with code, works with tests.


I've never had a teacher ask me questions about an essay, but sounds like a pretty good method to know if they did the research and understand the material. Much better than the impossible detector.


Any plagiarism detection, be it by manually googling suspicious parts, comparing against other students' work, comparing programmatically against some bigger database, or this one, should preferably be used only as part of the verification. So it should be a first step, and then questioning should be used. Or you give the student(s) a 0 and they can argue about it (works at uni level). (In case of plagiarism between two or more students on the same task, all get 0; we don't assume.)

Just listen when they have real arguments; don't be like this: when I was a student at uni, like 5 years ago, one of the lecturers zeroed a score because code was copied from pastebin... The date of the paste was after the lab, and it literally had the username pointing to that one student; they posted it after the lab to compare with other people, but before the code was actually checked for plagiarism. The lecturer was shit and didn't give points even after being shown all the proof...


> it should be a first step and then questioning should be used.

Comp sci uni lecturer here, and that is exactly how it works at my uni. Some parts of the process are pretty convoluted (submit report, reviewed and allocated centrally, some types of misconduct require a panel interview, etc.) but it works pretty well. It's usually clear to see if a student has any idea about the code they submitted with a few simple questions. If it checks out, case dismissed and no penalty/record. Sadly, I feel that some lecturers turn a blind eye to it or don't look closely enough to spot issues, so a fair bit goes unreported... either because of the time it takes, not caring, or trying to juggle competing objectives of academic rigor vs. pass rates...


Agreed and I feel you. Professors with tenure are almost always ruthless. They just don't care anymore.


Plus you get tired of the battle.


Yeah, I think as a student I'd still use ChatGPT, but to help with the research, build onto ideas, and get through rabbit holes more quickly. Also fact-checking the things that ChatGPT says and building off of them. It is a very remarkable tool that can be used as an aid, not to do all the work.


Yep, using it as a kind of glorified search engine is the best option. Sometimes you don't know what to Google, so you can use it to re-formulate the problem for you, etc. But the most important thing is that manual check later, as you said. A lot of people don't do that. They confuse confidence with being correct. On programming Discords we get a lot of kids who praise ChatGPT or come with problems and say "but ChatGPT told me to use this" when the method/function it used doesn't exist (because it basically cannot use libraries; it works for vanilla code)...


I agree. I worked as a TA, and at first it was saddening to see how little effort they'd put in. After a while I was gladly handing out 0s. I saw so many copy-pasted repositories, line by line with the same comments. You could easily remember the good students from each class. Unfortunately the faculty wanted more money, so even if students were caught, they wouldn't suffer any consequences aside from a bad grade.


I wrote a text about oranges and it was rated 11.8% fake; ChatGPT's text was 0.1% fake. Am I real?


What kinds of essays?


so much kind


Him fail English? That unpossible, he an English teacher


Update: it rated my essay, which I wrote entirely by myself, as 40% fake.


It is rating everything 0.02% fake, even the responses I directly generate to test it out.


The GPT-2 detector marked my human written response as 97.33% fake. These GPT detectors don't work well purely because GPT writes almost exactly like a human would. Edit: Tried it out on GPTZero, also marked my human written text as most likely AI written. Prompt: Nuclear energy is a clean and efficient energy source. It works by using radioactive materials to boil water, which in turn, causes a turbine to turn generating electricity.


It does say that it gets more accurate at 50 tokens. But yeah, I hope the people rating students' essays aren't depending on these fully.


No matter what I write in there it flags it as really, really fake.


Maybe you're a bot...


This seems to only be able to detect "AI" in English texts; everything I generated from ChatGPT in German is "99% Real".


"Ayo bro, german isn't real, it's an AI generated form of communication people dont use, yo"


It detected an essay I wrote as AI, and an email an AI wrote as human written.


99.41% Real for an essay created by ChatGPT. This thing is less than useless.


There is no solidarity among programmers, and there never will be. We are freelance mercenary wizards who will undo each other's works as long as the money is good. *"Without question? No, I'd ask how much."* - Ser Bronn


Which in turn also improves our ability to outwit each other.


Didn't you mean "to outwit one another"?


“Ads!” “Adblock!” “Evade the block” “Block the evasion”


"DNS blacklist ad domains" "randomly generated DNS names" "Invent Skynet to prevent ads on the internet altogether" "create the Matrix to directly stream ads into our brains" "Nukes the planet to EMP all machines and go back to stone age" "writes advertisement for Brog's stone circle on cave wall" and the cycle repeats


Plot-twist: humanity has been stuck on a loop for trillions of years


They thought it was impregnable. Gimme a server rack and a week, I'll impregnate the bitch!


>There is no solidarity among programmers, and there never will be. When it's Friday at 6pm and the whole office is cracking beers and I've submitted a CR with lots and lots of lines and my colleague reviews it in 30 seconds, that's love .... that's love. Sure my shitty commit is gonna fuck us in the very near future, but that's love, that's love.


Just like any other mercenary of the corporate world. What ever the customer wants, we will deliver and in turn destroy another man’s life work.


Why should there be solidarity here? The point of the AI isn't to be a plagiarism machine.


It’s low-level mobster mentality, “snitches get stitches.” Put ’em all in jail I say.


"Infinite are the arguments of mages" - Ged, *Tales from Earthsea* by Ursula K. Le Guin


Wow, we really are all Bronn. Last week I told my buddy I'd never quit my job because I do so little and make enough. Then I got an offer to double my salary, and even though I'll probably have to work a little more, I still only have to work 40 hours, but the number was right.


To be fair, *all* programming is an effort at circumventing someone else's programming.


I will get "freelance mercenary wizard" tattooed somewhere, this is beautiful, thank you so much


Old saying is, "Diamond cut Diamond"


Lasers also cut Diamonds


lasers cut lasers


lasers can cut a lot of things to be honest, never get too cocky because you never know who might have a laser


I thought it was "snitches get stitches"


Laser beams on the other hand...


magic must defeat magic


"ChatGPT write an essay that isn't detected by GPTZero about..."


It only has knowledge up until 2021 so it won't know more recent developments. It will pretend it does though.


Maybe in chatGPT 5


Gptzero, oh so bright
A model that works day and night
No task too big, no problem too small
Gptzero can do it all

With its massive size and deep learning power
Gptzero's ability to predict is dower
It can generate text and analyze data
Gptzero is truly first rate

A true marvel of modern AI
We salute you, gptzero, bravo and hooray!

-Totally didn't get chatgpt to write a poem about gptzero


Put that poem into GPTZero. #I dare you.


Adversarial training in real time LOL


That's only if OpenAI trains against this detector and releases a new model. The existence of an adversarial model doesn't magically make the existing model better.


They are actively always improving it.


Anyone genuinely committed to getting a degree without learning a single thing will just have to continue paying a human being to write their essays. Those are the only things that can get a pass from every anti-plagiarism tool.


Imagine someone's career plan is, "I want to get a bunch of jobs then fail at them and get fired"




![gif](giphy|z5hNwC2O7GbyU) ChatGPT now


Mind crush


Ok... but why not use the AI detector the OpenAI team used when creating ChatGPT?


Imagine how annoyed you'd be if schools used this software and your essay was a false positive.


Just had this conversation with a student yesterday who was not at all pleased he was going to have to rewrite a paper after the first one was flagged as possibly written by AI.


My teachers always hounded me about how writing outlines and first drafts would improve my grades on essays. Of course, in school, I didn't care and just wrote the essay at the last minute. Perhaps this trail of planning will become required work that must be submitted with the essay.


Hey chatGPT, write an outline for the following essay:


This is the thing I'm using ChatGPT right now for because it's completely unusable to write longer and detailed texts but it's great to generate structures.


“Show your work” was a part of every math question as a high school kid in the 90s. I guess this AI stuff is to essays what calculators are to the math student. Oddly enough, should it come to this, it will probably make students better planners and writers as they will be forced to plan and write properly.


I'd be raising fucking hell.


If my uni did something like that, I’d be so furious that I’d consider transferring. My tuition money can go to a school that doesn’t falsely accuse innocent people, thanks.


Doesn't that just push the kids to use AI anyway? "Well, if my work doesn't matter, I'll just use AI."


I'd talk to administration about it if I wrote a whole fucking essay myself and it got flagged. That's bullshit. Plus, there's no guarantee that after you change it, it won't just get flagged again.


That one kid who reminds the teacher they forgot to collect homework


Eh, this is a little different because it’s large scale plagiarism, which is a way bigger issue than some homework


Fascinating fun fact: when it comes to The Great Gatsby, teachers can’t use plagiarism detection programs because it flags it every time. Everything of value that could be said about The Great Gatsby already has been said.


This sounds really interesting, do you have a source for it that I could look more at? A cursory google search didn't throw up anything so I was wondering if you knew.


Source: I made it the hell up but it sounds cool


Good - maybe they’ll stop making it required reading then, lol


Yeah, I definitely hated reading the Cliffs Notes of that book in high school.


Plagiarism is the fraudulent representation of another person's language, thoughts, ideas, or expressions as one's own original work. A chatbot isn't a person lol. (I'm only joking, it's 100% IMO; just saying that the definition currently needs to be updated.)


Imo it isn't plagiarism, because of exactly the reason you gave. But it's still academic dishonesty, just a different form. You're supposed to write your essays *yourself*.


Basically the equivalent of cheating by paying someone else to write your assignments for you. You didn't write it yourself, something else did it for you, like mashing autocorrect to have it do the essay instead.


If it is easily plagiarisable, then the question is stupid imo.


Ugh fuck that kid


Irony: it just asks ChatGPT if it wrote it.


Doesn't even work. I copied two paragraphs from a random blog about sunflowers and the answer was "More data may be needed to determine if your text is human or AI generated. Try inputting more text." Then I generated two paragraphs with ChatGPT about sunflowers and the verdict was: "Your text is likely human generated!" So... yeah... idk :D


An essay does tend to be more than 2 paragraphs; it likely does just need more data.


What if I write an essay and use chatGPT when I get stuck or need help? Still seems like the same issue. I wouldn't think many people would ask for essays from the bot wholesale.


>I wouldn't think many people would ask for essays from the bot wholesale. You overestimate many students, then. Tons of high school kids and comparatively less, but still many, college freshmen would *absolutely* ask a bot for a wholesale essay and then turn it in without so much as even a proofreading.


I feel like some university somewhere will implement it and be sued when someone is expelled over a false positive


You should try the detector made by OpenAI itself. I'd say it's 90% accurate, much better than what the student did. https://openai-openai-detector.hf.space/


Does real mean it was created by an AI? Because I asked it for a book review of Brave New World, put it in, and it said 99.48% real.


Real means it's written by a human.


So then it just got it insanely wrong, cool. As an experiment I wrote my own review of it and it gave 25% fake, sooo yeah, maybe not a great tool.


Maybe you're a bot! 🤖 Have you checked?


I do have to fill out a lot of captchas twice….


I just tried and it gave me 0.02% Real


So what happens if schools use this in practice? If there's a 90% chance of getting it right (I'll assume the same percentage for Type I and Type II errors), then for every human essay there's a 10% chance of a false positive. Maybe you warn students on their first flagged essay and then punish them for the second? Well, now you have a 1% chance that any two consecutive human essays from the same student will both be wrongfully flagged. 1 in 100 law-abiding students will get wrongfully punished under such a system. This doesn't seem fair at all. Teachers instead have to actually create new material, material that *actually* demonstrates a student's knowledge of the subject rather than simply their ability to regurgitate words in a way that the teacher likes.
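The back-of-envelope math above checks out, and it gets worse as the number of essays grows. A quick sketch of the arithmetic (the 10% false-positive rate is the commenter's assumption, not a measured figure):

```python
# Assumed false-positive rate from the comment above.
p_false_positive = 0.10

# Chance that two consecutive honest essays are both flagged
# (warn-then-punish policy): 10% * 10% = 1%.
p_two_consecutive = p_false_positive ** 2
print(p_two_consecutive)  # 0.01, i.e. about 1 in 100

# Chance an honest student sees at least one false flag over n essays.
n = 8
p_at_least_one = 1 - (1 - p_false_positive) ** n
print(round(p_at_least_one, 2))  # 0.57 for 8 essays
```

So over a semester of eight essays, more than half of honest students would be flagged at least once under these assumptions.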


If someone is cheating with two paragraphs…really?


idk have not cheated and written an essay in 15 years :(


A more useful usecase than you might think


Good to know


Let’s be honest, any AI-written essay is going to be nothing more than superficially-correct drivel. If an educator passes it, they’re phoning it in as much as the author.


Educators phone it in most of the time particularly when correcting essays. I mean even if they're not lazy, look at how much stuff they'd have to put serious thought into. Nobody has the energy for that.


And the AI race begins.


Some kid in my college class got caught using AI to write answers. Teach was not pleased.


On one of the bachelor courses at my uni, it is explicitly stated at the beginning of the semester that for any lecture tests or exams, you can use any source you want apart from another person. ChatGPT came out late during the semester, while there were still lecture tests to go, and the lecturer determined and announced that "You can use ChatGPT if you want to, just please include the exact question formatting and exact answer somewhere so that I can have a laugh at the results."

He of course doesn't post all the (incorrect) auto-generated answers, but so far I think he's only posted one where the answer made sense, with the surprised exclamation that "oops, ChatGPT 4/4. Or, well, let's make it 3/4 thanks to mistranslations and this one painfully incorrect qualifier". And he's still just disappointed that people don't tell him what exactly they asked of the bot.

And for the exams, the night before he usually goes "ah fuck, it's 9pm already, I should make up some exam questions and then feed them to ChatGPT just to make sure."


I openly talk about how I used chatGPT to help me write solutions at work. Specifically I asked it to make some regex patterns for me. I'm not going to try and learn regex on purpose now.


I’m a couple months into my first year and am still blissfully unaware of what regex actually is, but I’ve heard it’s annoying cos it’s just useful enough that you need to use it, but not useful enough to use often so you forget how.


Are you allowed to use chat GPT to solve difficult math problems? When I was taking calc I had to write all the hard problems on the chalkboard in the hallway and wait for Matt Damon to solve it


He should sell it to the colleges, and then every year sell them an “updated version” and tell them that last years is out of date and no longer relevant.


You could always generate the essay and then rewrite it surely


Which is the best way to do it anyways. Using it as a tool to generate a rough draft or edit a rough draft you've written leads to much better results and I don't think should be an issue. Copying it directly without any thought is bad (and leads to some pretty boiler plate feeling results), but saying it shouldn't be used at all is like saying calculators are cheating at math. There is an argument that younger students need to not use it at all because they're learning the fundamentals of writing, but for college students it's just another tool


Time to learn how to not write like an AI


This exists already. The question is: can you make an AI that changes AI-written content in a way that AI-detecting AI can't detect the AI in it? Yes, because that also exists. But... can you make an AI that manages to see the AI-made changes that make it impossible for an AI to determine if content was written by AI, so it can again determine if it was written by AI? What I mean is: how far can we go?


At what point does this line become so blurred that there is no fundamental difference between AI-written work and human-written work? We’re trying to categorize writing into “AI” or “human”, but at the end of the day it’s just words on a page. At some point, if we aren’t there already, the AIs will be able to put words on the page that exhibit the same properties that human writing does. Put another way, if I submit “4”, it’s impossible to know if that 4 was created by doing 2+2, 2*2, or (3+5)/2. It’s fundamentally the same outcome that you would get either way.


You can take the output from ChatGPT and feed it into rephrasing AIs; works 100%.


It’s not that hard when ChatGPT always starts new paragraphs with “another reason” or “ultimately”.


These ai using cheaters are going to get so good at cheating it’s going to make us non cheaters need to cheat to have a respectable essay. It will be like the Tour de France for essays where absolutely everyone is cheating


It would be funny if it just asked ChatGPT if it wrote it


100% predictable. I called that this would happen. If you can make AI write essays, you can make AI detect essays written by AI.


False positive rate? Yeah, there is no way it's good enough. The generator will always eventually beat the discriminator.


I mean... that's kind of how some AIs are trained, particularly ones attempting to simulate being human. [Generative adversarial networks](https://en.wikipedia.org/wiki/Generative_adversarial_network) where you train AIs by pitting them against each other. GPT doesn't use GANs, but other attempts at text generation have done so. It's not super easy though. Eventually I imagine GAN layers will be effective, but we're not there yet.


High false positive and false negative rates. Schools better not start using this.


i know someone who went to school with this kid. when the story first broke, i wanted to give him the benefit of the doubt as just a curious programmer trying to build something cool, but apparently the kid's always been a huge whiner and a snitch. so the development actually was probably fuelled by him being mad that one of his peers cheated on an essay. lol


Its a bot vs bot world!


Could you not just have ChatGPT write it for you, then run it against GPTZero and change it enough so it won't signal as plagiarism?


The funny thing is, in school, plagiarism is a serious issue. But using something like ChatGPT in the real world to get you started on a book outline or something else, nobody will care about. It's a way to get the creative writing started kind of like a writing prompt. You're not copying from anybody. You're using an AI program as a ghost writer. And that's done all over the world by a ton of people. But then again as always, the legal world is severely lagging behind technology. Sometimes it catches up well. Other times they catch up and it's bad. But if somebody is stupid enough to just copy and paste everything from ChatGPT word for word into a book form or a paper, you deserve what's coming to you.


I'm a high school English teacher, and all this fear of GPT is just a Luddite approach. Students should be taught how to use GPT and other AI as tools to generate better writing than they're capable of producing on their own. Just like a calculator, it's a tool, and its output is largely determined by the user's input. Teaching students to use this tool is the right way to adjust to the new world where it exists.


I am not usually critical of adopting new technology in education, but I honestly completely disagree. I think that crafting an argument and knowing how to appropriately justify it are some of the few things in school that are taught not only for you to “learn how to learn”, but are things most people will likely genuinely need later on. Maybe it won’t be quite in the form of the formal essay, but the essay enforces good practices. So it doesn’t matter if ChatGPT’s outputs are “better”—the point is that in using ChatGPT, you sacrifice the chance to develop your own style of communication (which is critical since if you don’t know how to communicate your own thoughts, your thoughts and perspectives will be filtered and are effectively useless in any group setting). Furthermore, depending on the degree to which it is used, you also lose the core process of writing an essay where you figure out what your own opinion is, why you hold that opinion, and why you don’t hold any other opinion. ChatGPT can come up with opinions, justify them, and come up with flaws of other opinions, but those thoughts are not yours and handing in a paper in which ChatGPT’s writing justifies ChatGPT’s arguments is an exercise in futility.


This. I taught writing for a while after grad school. Using language correctly is one facet of writing, sure, and the one all the grammar weenies get their thrills from. But writing is, at its base, thinking done on paper—creating order out of a pile of facts, establishing relevance, connecting ideas, challenging assumptions by presenting and critiquing evidence, drawing conclusions—and there’s literally no other way to test your thinking than to write it down and submit it for outside review and comment. (Or build something with it and send it to QA.) Is that what happens in the 7th grader’s persuasive essay? Maybe, maybe not. But how often have you not really known what you think about a thing until you’ve written it down, for college or journaling or whatever? To quote Wittgenstein, “Tautologies and contradictions, the propositions of logic, are the limits of language and thought, and thereby the limits of the world.” If we delegate language and thought work to AI, if we don’t learn how to think it ourselves, what then happens to the limits of our world?


With calculators, we still deem there is value in the skills of independent evaluation, hence we are taught the "by hand" methods and calculators aren't always allowed on exams that are intended to test those skills. Does the same not apply to ChatGPT? Technically, using that tool, we could get away with literally never writing professional or essay-like text again, but is that really the best outcome? It can also be sufficiently "creative" for you (it can help with writer's block and write scenarios in a fairly dynamic way), but do you want a society so out of touch with their own ability to construct novel permutations of ideas? These are important consequences to consider.


How do you imagine that working? Do you envision students being able to draft the skeleton of an argument, identify sources for their points, and then use an AI to spin their outline and citations into prose? If that is your endstate, why do we bother writing in the first place? Why not just write the outline and call it good?


It's not just about the result; if a student doesn't write their work themselves they miss a crucial part of learning, and they cannot develop their own style. When you use a calculator, there's only one possible output. ChatGPT et al. have multiple possible outputs for the same input, that alone makes it incomparable to a calculator.


It's going to completely short circuit the point of writing these assignments though.


Smart: kill your competition. Make sure humans write essays so he can pay his tuition.


no no no stop it. they will feed it to chatgpt and it will reach singularity


Shit guys, turns out I'm an AI and I didn't even know it. At least according to this guy's program.

Yeah, this is going to need some serious development before it can be considered even remotely accurate. I'm not sure what its rate of false positives and false negatives truly is, but I fed it 5 essays that I wrote and 4/5 failed as being human-written.


I would bully him. Forever


This is the kid that says "what about the homework?" when the teacher forgets.


It’s funny seeing people seriously discuss this program. It’s a project from a college student. It’s almost certainly shit.


Interestingly, the GPT developers are trying to make the language generated by ChatGPT (and, I believe, GPT itself) identifiable, since when they train GPT-4+ on the internet they don't want to accidentally train it on data generated by GPT.