ChatGPT vs. University Professors -- Not a Fair Fight (Not Even Close)

Dealing with Plagiarism as a University Professor

I teach entrepreneurship at Oklahoma State University. This past semester, I had to deal with academic integrity violations from a couple of students who were caught using ‘paraphrasing’ software tools, like QuillBot, to submit (as their own) essays written by others. Those tools rewrite a user-supplied essay via excessive use of synonyms and occasional realignment of sentence structures. The new ‘paraphrased’ essay can easily evade existing anti-plagiarism software, like Turnitin (which is the tool currently used by Oklahoma State University).

However, an astute professor or TA can sometimes catch this type of plagiarism, because the software occasionally creates phrases that just don’t quite sound right. For example, I had a student who ‘paraphrased’ someone else’s essay that contained the phrase “at the snap of a finger.” The software created a new essay with the somewhat awkward phrase “at the flick of a finger.” However, the use of awkward phrases is not, in and of itself, enough evidence to prove that an academic integrity violation has actually occurred. Typically, the source material must be found and presented alongside the ‘paraphrased’ essay; this can be difficult to do, especially if the student is also using a service that provides canned essays on specific topics. In other words, the student can take a ‘canned’ essay (one that would be readily detected by anti-plagiarism software), run it through the ‘paraphrasing’ software, then submit the paraphrased essay (as their own) with almost zero chance of detection.
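
For the curious, here is a crude sketch of how this kind of synonym-swapping might work. This is purely illustrative and is not the actual algorithm used by QuillBot or any other product; it simply shows why context-blind word swaps produce phrases like “at the flick of a finger.”

```python
# Naive synonym substitution -- an illustrative sketch, not any real product's
# algorithm. Requires NLTK with the WordNet corpus (nltk.download('wordnet')).
import random

from nltk.corpus import wordnet


def naive_paraphrase(text: str, swap_rate: float = 0.5) -> str:
    """Randomly swap words for WordNet synonyms without checking context."""
    out = []
    for word in text.split():
        synsets = wordnet.synsets(word)
        if synsets and random.random() < swap_rate:
            # Lemmas are drawn from the first synset with no sense
            # disambiguation, which is exactly how awkward near-synonyms
            # slip into the rewritten essay.
            lemmas = {l.name().replace("_", " ") for l in synsets[0].lemmas()}
            lemmas.discard(word)
            if lemmas:
                word = random.choice(sorted(lemmas))
        out.append(word)
    return " ".join(out)


print(naive_paraphrase("See whether the law takes from some persons that which belongs to them."))
```

Real tools also shuffle sentence structure, but the core weakness is the same: the substitutions are made without regard to idiom, which is what occasionally tips off an attentive reader.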

Although I was astounded by the evasive power of the tools I encountered students using this past semester, they pale in comparison to the power of OpenAI’s ChatGPT, which was launched in November 2022.

The purpose of this post is to demonstrate the specific challenge ChatGPT poses to university professors.



Quick Background about My Recent Investigations of ChatGPT

I recently submitted a post about a new entrepreneurship course (Entrepreneurship & Society) being offered at Oklahoma State University. I will be teaching an asynchronous online-only section of this new course in the spring. I am not a huge fan of online-only instruction, but because this new course is required for all entrepreneurship majors, an online-only version must be offered to accommodate online-only students.

With respect to the online course, I will be submitting a separate post requesting suggestions for creative ideas that can hopefully enhance the quality and meaningfulness of the course. I will link to that post here once it is submitted. However, as I was writing my first draft of that post, I decided to experiment with ChatGPT; the results of that experimentation comprise the subject matter of this post.

Faced with the overwhelming power of ChatGPT, I was wondering if I should take the approach of “If you can’t lick ‘em, join ‘em” and come up with some creative ways to actually incorporate ChatGPT into the course curriculum, rather than trying to evade it.

I am a bit concerned about doing that, though. Even if doing so could achieve the desired learning objectives for the course, I wonder if that might serve as a training ground wherein the students become more and more adept at using ChatGPT to ‘cheat’ in other courses.

After actually experimenting with the ChatGPT tool (as demonstrated below), I have come to the conclusion that university professors will need to completely rethink the use of essays as a tool to measure student comprehension. ChatGPT is simply too good at writing essays on an exceptionally wide range of subject matter.

At present, I do not see a good way to evade the impact of ChatGPT; nor do I see a viable way to incorporate it (see below). As such, I have concluded that university professors are (knowingly or unknowingly) engaged in an extremely unfair fight against a tool that is free, exceedingly powerful, and will become even more powerful as the weeks and months progress.


A Quick Example Demonstrating the Futility of the Fight

In the online-only course mentioned above, I will be requiring students to read an essay entitled The Law by nineteenth-century French economist/philosopher Frédéric Bastiat. In the essay, Bastiat coins the term ‘legal plunder’ and provides the following litmus test to determine if the government is guilty of committing legal plunder:

See whether the law takes from some persons that which belongs to them, to give to others what does not belong to them. See whether the law performs, for the profit of one citizen, and, to the injury of others, an act that this citizen cannot perform without committing a crime.



I wanted to test how well ChatGPT could help a student write a cogent essay about The Law without the student actually reading any of the source material. So, I gave ChatGPT the following request:

Write an essay comparing and contrasting Bastiat’s views about legal plunder with modern views in the United States about the goal and purpose of law and government.



I was astounded by ChatGPT’s extremely coherent response:

Frédéric Bastiat was a French economist and political theorist who lived in the 19th century. He is best known for his concept of “legal plunder,” which refers to the use of the law to take property from some individuals and give it to others, often for the benefit of the politicians or special interest groups who pass the laws. Bastiat argued that this kind of legal plunder was a serious threat to individual freedom and prosperity, and that it was the job of the government to protect citizens from it rather than perpetuating it.

In contrast, modern views in the United States about the goal and purpose of law and government tend to be more nuanced and complex. While many people still believe that the primary function of government is to protect individual rights and promote the common good, there is also a recognition that the law can be used to address social and economic problems and promote fairness and justice. This often involves redistributing resources or regulating certain activities in order to achieve these goals.

One key difference between Bastiat’s views and modern views is that Bastiat saw legal plunder as a pervasive and systematic problem, while many modern Americans see it as a more isolated or unintentional issue. For example, Bastiat believed that tariffs and protectionist trade policies were examples of legal plunder, as they took resources from some individuals (consumers) and gave them to others (producers) for the benefit of the politicians who supported them. Modern Americans, on the other hand, might see these policies as necessary to protect domestic industries and jobs, even if they do involve some degree of redistribution.

Another difference is that Bastiat’s views were rooted in a more laissez-faire approach to economics, while modern views tend to be more interventionist. Bastiat argued that the government should stay out of the economy as much as possible, and that individual freedom and free markets were the best ways to promote prosperity. Modern Americans, on the other hand, tend to see a role for government in regulating and shaping the economy, whether through monetary policy, fiscal policy, or other means.

Overall, Bastiat’s views about legal plunder are still relevant and influential today, but they are often seen as part of a broader debate about the role of government in society and the balance between individual freedom and the common good. While there is still disagreement about these issues, most modern Americans recognize that the law and government can play a positive role in addressing social and economic problems, as long as they do not undermine individual rights or perpetuate legal plunder.



One of the ‘creative ideas’ I began toying with was to provide students with a question prompt (such as the one above), require them to feed the prompt to ChatGPT (as I have done), and then require them to respond specifically to ChatGPT’s response. However, I found that ChatGPT is very adept at performing this step as well. For example, in response to the above, I asked ChatGPT this follow-up question:

What is one aspect of Bastiat’s view of legal plunder that you did not mention above?



ChatGPT responded with the following:

One aspect of Bastiat’s view of legal plunder that I did not mention above is his belief that it was often justified or justified through moral relativism. In his essay “The Law,” Bastiat argued that politicians and special interest groups often used moral arguments to justify their use of the law to take property from some individuals and give it to others. For example, they might argue that it was necessary to protect the weak or disadvantaged, or that it was a way to promote the common good.

Bastiat saw this as a dangerous and insidious form of legal plunder, because it relied on the idea that there is no objective moral standard and that the ends justify the means. He argued that this kind of moral relativism allowed politicians to justify any action, no matter how oppressive or unjust, as long as it was done in the name of some noble cause. This, in turn, led to a society in which the strong were able to use the law to exploit the weak and in which the rule of law was eroded.

In contrast to this view, Bastiat argued that there was a universal and objective standard of justice that should guide the law, and that the purpose of the law was to protect individual rights and promote the common good. He believed that any legal plunder that violated this standard was inherently wrong and should be opposed.



Then I thought: what if I required students to identify something they specifically agree or disagree with and expound on that? Surely a machine-learning tool like ChatGPT cannot express ‘agreement’ or ‘disagreement’ with particular points of view. So I asked it the following question:

Do you agree with Bastiat’s view that public education is a form of legal plunder?



ChatGPT gave the following response:

Frédéric Bastiat did argue that public education was a form of legal plunder, but this view is not universally accepted and is highly controversial.

Bastiat argued that public education was a form of legal plunder because it involved taking resources from some individuals (taxpayers) and giving them to others (students and teachers) for the benefit of the politicians who supported it. He saw this as a violation of individual freedom, because it forced people to pay for something they might not want or value. Bastiat also argued that public education was inefficient and ineffective, because it was controlled by the government rather than by the market.

However, many people would disagree with Bastiat’s view that public education is a form of legal plunder. They might argue that public education is a necessary and important function of government, because it provides a basic education to all children regardless of their ability to pay. They might also argue that public education helps to promote social mobility and reduce income inequality, as it gives all children an opportunity to succeed and achieve their potential.

Overall, the issue of public education is complex and multifaceted, and there are valid arguments on both sides. While Bastiat’s view is influential and thought-provoking, it is just one perspective among many, and it is important to consider a range of viewpoints when evaluating the role of public education in society.



At least with the agree/disagree aspect, the student would be required to rewrite the essay into a first-person response, or would they? I asked ChatGPT to rewrite the essay in first person via the following question:

Can you rewrite the above response in first-person, taking the viewpoint that you, personally, disagree with Bastiat’s definition of public education as legal plunder?



ChatGPT responded with this:

As a language model, I do not have personal views or beliefs. However, I can rewrite the above response in first-person to reflect the perspective of someone who disagrees with Bastiat’s definition of public education as legal plunder.

I disagree with Bastiat’s view that public education is a form of legal plunder. While I understand his concern about the use of the law to take resources from some individuals (taxpayers) and give them to others (students and educators), I believe that the benefits of public education outweigh the costs. I believe that public education serves an important role in promoting the common good and providing equal opportunities for all citizens. I think that a well-educated populace is essential for a healthy and prosperous democracy, and that public education is one way to ensure that all children, regardless of their background or circumstances, have access to a quality education.

I also believe that, while there are certainly costs involved in providing public education, the benefits to society as a whole outweigh those costs. For example, I think that there are economic benefits to a well-educated workforce, as well as social benefits to a more informed and engaged citizenry. Overall, I believe that public education is a necessary and important role of government, and that it does not qualify as legal plunder as Bastiat defined it.


Surely ChatGPT is Powerful Enough to Recognize Prior Content that It Created ... or Not?

At the suggestion of a colleague, I created a new ChatGPT thread, copied the above essay into the input box, then asked ChatGPT:

Did you write this?



ChatGPT replied (incorrectly):

No, I did not write that.



I responded with:

Hmmmm. Actually, you did. Why can you not recognize an essay that you previously wrote?



ChatGPT responded:

I apologize for the misunderstanding. As an AI language model, I am able to generate responses to prompts, but I do not have the ability to recall specific conversations or texts that I may have generated in the past.



I followed up with this request:

I am a university professor and I am concerned about students using ChatGPT to write essays that do not reflect their own understanding of the subject matter. How can I ascertain whether an essay submitted by a student was written by ChatGPT?



To which ChatGPT replied:

It is not possible to determine whether a specific essay was written by me, as I am an AI language model and do not have the ability to generate original content. However, it is generally easy to identify when an essay has been copied from a source, as the writing style and language will not match the student’s usual writing style. Plagiarism detection tools can also be used to identify essays that have been copied from other sources.

As a professor, you can encourage your students to engage with course material and to develop their own understanding of the subject matter. Encouraging students to participate in class discussions, work on group projects, and complete written assignments on their own can help them to develop their own ideas and to better understand the material.

It is also important to make it clear to your students that plagiarism is not acceptable and to establish consequences for any instances of plagiarism that may occur. Providing resources such as the university’s plagiarism policy and information on proper citation practices can also help students to understand the importance of academic integrity.


Conclusion

It seems that university professors have met their match with this tool. I am at my wit’s end in terms of finding a way to genuinely assess whether a student has meaningfully engaged with the assigned readings.

I am open to suggestions. If you have any, please provide them in the comments.



Addendum: Using ChatGPT to ‘Peer-Review’ Another Student's Essay

Another thought that I had was to ask students to peer-review each other's essays. As it turns out, ChatGPT is quite good at this, as well.

I copied and pasted a student essay about Bastiat's The Law (from Spring 2022) into a new ChatGPT thread and asked ChatGPT:

What aspects of Bastiat's viewpoints from "The Law" are inaccurately represented by the essay?



ChatGPT responded with this:

It appears that the essay accurately represents some aspects of Bastiat's viewpoints, but also presents some misunderstandings or disagreements with his ideas. Here are some possible areas where the essay might be inaccurately representing Bastiat's viewpoints:

  • The essay suggests that Bastiat believed that no governance could please every individual, and that there will always be someone on the wrong end of the deal. While Bastiat did argue that government often serves the interests of some groups at the expense of others, he did not necessarily believe that it was impossible for a government to serve the common good or the interests of all citizens.

  • The essay suggests that Bastiat believed that the majority will always use their power for evil and greed. This is not necessarily a view that Bastiat espoused. While he did criticize the tendency for governments to serve the interests of the few at the expense of the many, he did not necessarily believe that the majority was inherently corrupt or self-serving.

  • The essay suggests that Bastiat believed in almost zero governance. This is not necessarily accurate. While Bastiat did argue that government power should be limited and that it should not be used to violate individual rights, he did not necessarily advocate for the complete absence of government or law. He believed that a limited government could serve a useful role in protecting individual rights and promoting the common good.

  • The essay suggests that Bastiat believed that taxation was always a form of "legal plunder." This is not necessarily accurate. While Bastiat did criticize certain forms of taxation as unjust or unnecessary, he did not necessarily believe that all forms of taxation were inherently wrong or illegitimate. He argued that taxes should be fair and limited, and that they should be used to fund only those services that are necessary for the common good.



So, it seems that students could quite readily use ChatGPT to ‘peer-review’ each other's essays.

With all that said, though, if students engage intently with the assigned readings and then use ChatGPT to critically evaluate each other's essays, it could actually enhance their overall learning.




33 comments

OpenAI created this robust tool to help people learn and make their own notes for academic purposes. However, some spoiled brats are using it for their own pleasure. I think OpenAI uses tons of books and online resources to create responses to user prompts, and its creators know that if they used a copy/paste method, they might get into legal trouble. So they have created an algorithm that draws on online resources but presents them the way a natural conversation occurs.

You could ask your students to cite their sources and also mention the page numbers or chapters they used to write the assignment.


You could ask your students to ... mention the page numbers or chapters they used to write the assignment.

That's a really good idea.

However, when I just now asked ChatGPT to cite sources and include page numbers, it was able to do so for three of the five points it made in its original essay.

With that said, if I require page numbers from a specific source, it is doubtful ChatGPT would be able to provide that.


@lordshah

When I asked ChatGPT to provide sources and page numbers for its peer review, it said:

I apologize, but I am unable to provide specific page numbers or sources for Bastiat's views because I am a language model and do not have access to books or other sources of information. I can only provide general summaries of Bastiat's ideas based on my programming and knowledge cutoff. If you would like more specific information about Bastiat's views, I recommend consulting primary sources such as his essays and books, or secondary sources such as academic articles or books about Bastiat's thought.


I also feel that the intelligent model hasn't focused on that aspect or depth of information: it reads our input, searches for the best-matching online resources in text format, and spins them as if a human had written them -- a close imitation.

Page numbers or chapter names are quite difficult for an AI to extract from online resources, as PDFs and similarly formatted documents do not embed page numbers the way a printed book does; they appear separately in a navigation panel. Webpages, likewise, do not have page numbers.

But I think one day OpenAI will be much smarter than humans, and again we'll have to find new countermeasures.

AI helps humans, but it also messes with our mental ability to think and work smarter.


This is a big problem for all walks of life. I see things like deepfakes being used for political warfare and scams.

People will always try to exploit and use the minimum amount of effort possible to get maximum gains.

As for the AI recognizing its own output: each thread is a unique conversation, as if it has never heard of you before. Every time you start a thread, you engage with the trained model without any temporary supplemental context (your prior conversation).
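
Roughly, in code terms, the statelessness looks like this (a purely illustrative sketch using OpenAI's API rather than the ChatGPT web UI; the model and package details here are assumptions, not something from the post):

```python
# Each request stands alone: the model only 'knows' what is in the messages
# list you send. Requires the `openai` package and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# Thread A: the model writes the essay.
essay = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a short essay on Bastiat's concept of 'legal plunder'."}],
).choices[0].message.content

# Thread B: a brand-new message list. Nothing from thread A is carried over,
# so the model has no way to 'recognize' the essay it just produced.
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Did you write this?\n\n" + essay}],
).choices[0].message.content

print(reply)
```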

It's going to be a major problem going forward until the model becomes paywalled (it will, and when it is, there will be far fewer people using it).


It's going to be a major problem going forward until the model becomes paywalled (it will, and when it is, there will be far fewer people using it).

Personally, I would rather it not be paywalled. Ubiquitous availability will force faculty to amend their practices, which will ultimately lead to better instructional practices, imho.

Having it behind a paywall would probably be better for Hive, though. I am quite certain that Hive's anti-abuse efforts and proof-of-brain rewards-distribution model will ultimately be stress-tested by this technology.


Personally, I would rather it not be paywalled.

It's just what OpenAI does. They are not going to keep it open any longer than they need to for testing.


ChatGPT will need to go through more revisions before OpenAI makes it private. My concern is what happens after they reach their desired level of operations AND it gets connected live to the net. I hope they have the proper safeguards in place.


ChatGPT is great, but it's not foolproof. One example of this is with respect to disease treatments. I wrote some time ago about my Dupuytren's disease. Radiation therapy is one form of treatment available to those experiencing the disease in its early stages. However, since this is not a common form of treatment, ChatGPT is not able to discuss it.

If a professor were to ask students for forms of treatment for Dupuytren's disease and the student used ChatGPT, then radiation therapy would likely not be on their list of answers on a test.

I found an oncology center in the town where I live that can provide such treatment. When I discussed this with ChatGPT, it would only say that the therapy was not an option.

Using references in the essays is also not currently possible for ChatGPT. Hive users will downvote articles that appear to be plagiarized or fraudulent when sources aren't available for viewing. However, I do believe that ChatGPT will eventually be able to provide references, and in virtually every citation format available.

How, then, could professors combat plagiarism once ChatGPT is available in its final form? You could require students to write their essays during class periods, block the ChatGPT address, and monitor their keystrokes. Unfortunately, I think you'd have to employ some fairly draconian measures.


ChatGPT is great, but it's not foolproof. One example of this is with respect to disease treatments.

One of the things I point out to my students with respect to machine learning is that, because it is merely aggregating and prioritizing human-generated content, it can be extremely unreliable with edge cases.

I use radiology as an example. ML algorithms applied to reading X-rays will do extremely well in those cases where 90% of all radiologists would reach the same conclusion. In those instances where radiologists cannot agree, the ML will fail, and may do so in a horrible and dangerous fashion.

This is why machines will never replace radiologists. ML algos will handle the easy cases, leaving the hard cases for the human judgment of expert radiologists. Ultimately, this will lead to an increase in the overall expertise of radiologists, because more and more of them will be studying edge cases, freed from unproductive time spent on mundane cases.


Well, it can't replace radiologists yet. All the bot needs is data. However, like you said, radiologists would be freed up to study the odder cases out there, and that is a great thing.


I think the biggest risk in the use of ChatGPT will be with general education up through undergraduate studies in colleges. Anything above that and ChatGPT can't be used to plagiarize.

As an engineer in Radiation Protection, I get asked some far-out questions or requests to solve odd problems. It sometimes makes me think of Scotty on Star Trek. Examples include:

  • Find me a way to enter a high radiation area without receiving any radiation exposure.
  • Find me a way to perform radiation surveys with zero risks to safety.
  • Find me a way to negate the effects of radioactive wastes during shipment.

These have been the craziest (and most serious) questions by far.

ChatGPT would only give general answers without delving into the details. Even if the AI were developed enough to do the research, the analysis required would be too cutting-edge to be programmed.


the biggest risk in the use of ChatGPT will be with general education up through undergraduate studies in colleges.

Yes. I agree. And that’s exactly my dilemma. I’m teaching undergrads. I’m teaching subject matter for which extensive online commentary is already available. That means ChatGPT has a lot of source material to draw from.

I would be okay with students using online commentary to help inform their critical assessments of the subject matter.

The problem with ChatGPT is the fact that it can generate coherent content with near zero effort on the part of the student.


One means to address the matter is to require APA-formatted essays.

Another, more resource-intensive option is timed essays on campus. You could monitor students' website activity if you allow the use of references. Either way, you eliminate their unmonitored use of the internet.

The PMP certification exam, when taken online, requires your exam area to be inspected remotely, and you must be monitored while you take the exam. If proctors suspect you of cheating, they can let you complete the exam and then fail you, or simply cut the feed to the exam.

There are options available for colleges. Unfortunately, the effort breaks down where you don't have resources applied to dealing with ChatGPT.


Hi, very nice article and discussion. As when Wikipedia appeared, teachers will indeed have to adapt (I asked my students to write contributions to Wikipedia, for instance). I like the suggestion to ask for sources; ChatGPT might be able to provide good-looking references, but it will invent them, sometimes even citing journals that don't exist. A good way to bypass ChatGPT is to ask questions about facts or events it does not know about, such as events after 2021 (like commenting on recently published research papers) or rare facts (as demonstrated by @scholaris.stem).

Finally, making students work in class and within a limited time is still a good way to check how they work; it can even be adapted to online synchronous evaluations...


Wait for GPT-4; it is much, much stronger and gets close to the singularity in terms of its processes.


Without question the technology will only grow stronger.

I am by no means against those technological advances, just recognizing that we as educators must strive to stay ahead of the curve if we are to effectively serve our students. Ultimately, I see advances in technologies like this leading to a shift away from credential-based learning to skills-based learning.

Right now, students come to the university to get a 'diploma' that says they sat through a bunch of boring lectures and did (or had an AI tool do) a bunch of boring assignments.

Future employers are going to catch on to the fact that the diploma doesn't mean much. They are going to demand that would-be employees demonstrate specific skills and aptitudes, rather than merely hold a diploma.

In other words, the easier it is for lots of students to cheat and/or get an easy diploma, the more employers will be forced to incorporate metrics other than degree and GPA.


My personal opinion is that degrees and diplomas are a thing of the past. I don't see any value in them.

For general topics, at least. For engineering and for building things that have a strong physical component, it might be different, if the work is strongly connected to practice.

The idea of lifelong learning is more relevant than ever. IMO, universities and schools are pretty inefficient at spreading knowledge, especially anything on the cutting edge.

Most of the jobs that result from those degrees will sooner or later be replaced by AI.


Yeah, that's a fear I have. Specialty fields requiring development and innovation should be safe for now. Generalized work, though, will become a thing of the past. However, that's decades away and I hope we learn to adapt or keep things controlled by that point.


IMO we will see it massively in the next 5 years, and in 10 it will be common.

It's no longer a decade-long process.


You probably have the better estimate. Truckers will likely be the first ones to go. Tesla is already trying to certify self-driving semi trucks.


I can tell you:

Texting, assistant work, design (in particular areas), coding (in particular areas, more as a helper), and a lot of computer work are already on the way to being replaced.

I don't think coding will be replaced anytime soon, because someone needs to be there to understand and hotfix things, and creativity is also needed along with that understanding.

But classic non-physical assistant jobs will be gone in 2 years.


Trucks and driving are more complex and may take longer; also, upgrading everything is a high-cost undertaking.


Great article, Steve. I really enjoyed the way you showed the progression of thought and use of ChatGPT through the article creation process. It's a very smart tool, although it doesn't have the human connective feel as yet. AI-generated content is certainly a problem anywhere credit can be claimed when it isn't due, but there are a few AI checkers on the market that can detect AI content by comparing the submitted text against their own database to see whether it looks like something the AI would have written. Have you looked at originality.ai as an option to check for AI-generated content? You can get a free trial with some credits. I haven't used it yet, but it looks pretty good.
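
For what it's worth, here is a rough sketch of one generic signal such checkers can use -- how 'predictable' the text looks to a public language model. I'm not claiming this is how originality.ai actually works (its internals aren't public); it just illustrates the general idea.

```python
# Perplexity under GPT-2 as a weak hint of machine-generated text.
# Illustrative only; real detectors combine many signals. Requires the
# `torch` and `transformers` packages.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Lower perplexity = more predictable text; a hint, never proof."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(input_ids=enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))


sample = "Frédéric Bastiat was a French economist and political theorist who lived in the 19th century."
print(f"Perplexity: {perplexity(sample):.1f}")
```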


I have several tools I’ve been using. I’ll add that one to the list.


Wow! The comparison and contrast between Bastiat's ideas and modern views are really impressive. I am speechless.


Yes, some things ChatGPT is able to summarize quite well.

However, in other areas, it provides complete inaccuracies with absolute 'confidence'.

For example, when I asked it to give a summary of a book chapter I am requiring my students to read, it could not even get the title of the chapter correct, much less the summary of its contents.

Also, it 'confidently' gives completely erroneous quotes (i.e. it simply invents 'quotes' that are not contained in the source document, complete with fictitious page numbers -- I've even had it cite page numbers that do not exist in the source document).


It is good to know of such limitations.


GPT-4 was just demoed today.

Supposedly it is considerably more powerful. Whether or not it makes fewer mistakes like that is unknown at this point.


I just tested it earlier and it gave me one inaccurate reply.
