How to Compete With a Philosophy Artificial Intelligence Program
Not long ago, I was playing around with some philosophical software. It’s pretty cool, and it contains at least some artificial-intelligence algorithms. It works much like a chatbot, but it also holds all of the famous periods of philosophy and the major works of the philosophers, and it can search through them by keyword. When you make a statement, offer a comment, or pose a question, it searches its database, rearranging the words and creating derivative comments as if you were speaking to a famous philosopher of the past. That’s pretty interesting, isn’t it?
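The retrieval step described above might be sketched roughly as follows. This is a minimal illustration, not the actual program: the corpus, the keyword-overlap scoring, and the reply template are all assumptions made for the sake of the example.

```python
import re

# Illustrative stand-in for the program's database of famous works.
PHILOSOPHY_CORPUS = {
    "Plato": "The unexamined life is not worth living for a human being.",
    "Descartes": "I think, therefore I am.",
    "Sartre": "Man is condemned to be free.",
}

def tokenize(text: str) -> set:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_reply(user_statement: str) -> tuple:
    """Pick the passage sharing the most keywords with the user's statement,
    then wrap it in a derivative comment attributed to that philosopher."""
    user_words = tokenize(user_statement)
    best_name, best_text, best_overlap = "", "", -1
    for name, text in PHILOSOPHY_CORPUS.items():
        overlap = len(user_words & tokenize(text))
        if overlap > best_overlap:
            best_name, best_text, best_overlap = name, text, overlap
    return best_name, f"As {best_name} might say: {best_text}"

name, reply = best_reply("Do you think a machine can live an examined life?")
```

With this toy corpus, the question about an "examined life" retrieves the Plato passage, because it shares the most keywords with the user's input.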
It might be difficult for a student of philosophy to compete with this, because they may not know the dialogues and discussions behind all of these famous works. A PhD philosopher could talk all day with something like this, yet still might not defeat the software, because many of these philosophical conundrums run around in circles. In other words, it can keep someone busy for hours. In fact, in doing so it gives anyone looking at that information a psychological profile of the individual who is playing with the machine.
Nevertheless, someone can actually compete with this machine and cause it to make enough mistakes that its arguments are easily invalidated, in which case the person, using an organic mind, would be deemed to have won the argument or conversation. Now let me explain how I’ve done this, how I’ve beaten the machine at its own game. The machine doesn’t know that it has lost the argument, but a PhD philosophy professor could readily see that it has been beaten.
How is this possible, you ask? Well, you must understand that the machine doesn’t actually think. While it may have some components or some artificial-intelligence theory behind how it works, it still doesn’t think the way humans think. The AI program handles a good portion of existential philosophy, and even Plato’s philosophy from much earlier in human history, quite nicely, but you can really trip up the machine by talking about current science-fiction movies, the future of private space flight, modern jet airplanes, and social networks.
You see, no one has programmed any of that into the machine; rather, it has taken encyclopedia entries and perhaps the famous philosophical works from the Oxford library. When you bring up new paradigms and current events as examples, the machine fails. Understanding this, we now know a way to fix the problem: send the machine out to talk with lots of present-day college students, let them supply “client generated metaphors” that relate past philosophy to current events, and then program what they’ve said into your computer.
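The fix described above, folding student-supplied metaphors back into the machine’s database, could look something like this in spirit. Everything here is illustrative: the corpus shape, the function name, and the example metaphor are assumptions, not the software’s actual interface.

```python
# Illustrative corpus: each philosopher maps to a list of passages.
corpus = {
    "Plato": ["The cave dwellers mistake shadows for reality."],
}

def add_student_metaphor(corpus: dict, philosopher: str,
                         modern_topic: str, metaphor: str) -> dict:
    """Record a student's metaphor tying a past philosopher's idea
    to a current event, so future replies can draw on it."""
    entry = f"On {modern_topic}: {metaphor}"
    corpus.setdefault(philosopher, []).append(entry)
    return corpus

# A hypothetical "client generated metaphor" from a college student.
add_student_metaphor(
    corpus, "Plato", "social networks",
    "A news feed is a cave wall; the shadows are curated posts.",
)
```

Each conversation with a student would grow the corpus this way, giving the program modern examples it was never originally given.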
In doing this you give the artificial-intelligence computer system the final component it needs to actually pass the Turing test, even with someone with an IQ of, let’s say, 140+. That is something I would consider approaching true artificial intelligence, and at that point it would have passed the grade. Currently it does not, and it can be defeated with reason, the strength of your arguments, and modern-day examples that counter the arguments presented by past-period philosophers. As it fails, it becomes obvious that it isn’t ready for the big leagues yet. Indeed, I hope you will consider all this and think on it.