
Artificial Intelligence Past and Present
Zachary Haupt
Professor Adank
Columbia College – CISS 302

My term paper, Artificial Intelligence Past and Present, provides a brief history of the evolution of artificial intelligence. It also provides examples of career fields using artificial intelligence, such as the automobile industry, the healthcare industries and teaching, and addresses the pros and cons of artificial intelligence drawn from various sources. Overall, it shows how artificial intelligence is shaping our world by changing the way we communicate and interact with innovations in technology.


Artificial Intelligence Past and Present
Since the beginning of the development of artificial intelligence, the world has seen the strides we have made toward innovation in technology. These innovations have brought about changes which have begun to replace personnel, opened new developments in the information technology career fields, and altered how we communicate and interact with new technology. Technology has structured the ways we develop effective means of communication, because the public has needed newer ways to research topics which had not yet been explored. Artificial intelligence is discussed often, and its pros, cons and possibilities may well change the way we see the world. Could artificial intelligence replace teachers, doctors and other personnel who provide instruction, hospital care and care at home? This paper will suggest that the future may hold possibilities for the public to transform their homes and their lives, and for hospitals to benefit from technology.

The history of artificial intelligence begins with John McCarthy, a computer scientist who developed the idea and introduced it at the Dartmouth Conference in 1956. Regarded as the father of artificial intelligence, he conducted extensive research, opened a new area of expertise within computer science, and began projects that produced the first programming language for artificial intelligence, which would remain in use until newer languages were invented. John McCarthy invented Lisp in the late 1950s. Based on the lambda calculus, Lisp soon became the programming language of choice for AI applications after its publication in 1960 (Kirwan, 09/04/2016). In 1961, he was perhaps the first to suggest publicly the idea of utility computing, in a speech given to celebrate MIT's centennial: that computer time-sharing technology might result in a future in which computing power and even specific applications could be sold through the utility business model, like water or electricity (Garfinkel, 1999). This idea of a computer or information utility was very popular during the late 1960s but faded by the mid-1990s; since 2000, however, it has resurfaced in new forms such as application service providers, grid computing and cloud computing (Garfinkel, 1999). Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal (McCorduck, 2004). In 1963, J. Alan Robinson discovered a simple method to implement deduction on computers: the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems (McCorduck, 2004). A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and his collaboration with the French researchers Alain Colmerauer and Philippe Roussel soon produced the successful logic programming language Prolog (Crevier, 1993). Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition (Crevier, 1993).
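The Horn-clause "rules" that Prolog made tractable are the same if-then machinery the later expert systems were built on. As a rough illustration only, the following Python sketch forward-chains over a toy rule set; the facts and rules here are invented for the example and are not taken from any real system:

```python
# Forward chaining over Horn-clause-style rules: each rule has a set of
# premises and a single conclusion, and fires once all premises are known.
# The rule set below is a toy example invented for illustration.

rules = [
    ({"has_fever", "has_rash"}, "possible_measles"),
    ({"possible_measles", "not_vaccinated"}, "recommend_test"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_rash", "not_vaccinated"}, rules)
# Both conclusions are derived, the second from the first: the chaining
# behavior that made rule-based expert systems practical.
```

Prolog itself works backwards from a query rather than forwards from known facts, but both directions rely on the same restriction to Horn clauses to keep computation tractable.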

The evolution of artificial intelligence has shown considerable results because of the number of changes suggesting better ways to bring products about. The dawn of smarter, more adaptive machines means that specialization will not equate to limited use cases, and new technological capabilities will create new models of ownership and asset sharing. Think about the different ways a smarter mobile space can be used: for personal transportation by day, automated delivery by evening and as a habitat overnight, all on the same drivetrain. An expert system can answer questions or solve problems about a specific domain of knowledge, using logical rules derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students: Dendral, begun in 1965, identified compounds from spectrometer readings, and MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach (McCorduck, 2004).
The benefits of artificial intelligence have made it a hot topic for those working in the technology fields, and its results have been widely debated ever since artificial intelligence was first introduced to the public. A common concern is that artificial intelligence is dangerous to the public because it has not yet been fully integrated into technology by the makers who produce it. Overall, however, the perception of AI's success is positive: besides the 71 percent of respondents who believe AI adoption is inevitable, 76 percent of senior decision-makers agree that AI is fundamental to the success of their organization's strategy. Many of them also expect AI to have a significant effect on their bottom line: by 2020, they expect AI to be contributing a 39 percent average increase in revenue and a 37 percent average cut in operating costs (Hand, January 18, 2017). Some companies consider artificial intelligence the most important driver of innovation, but other entities argue it is not yet safe and that more procedures should be in place before bringing more artificial intelligence to the public. The dangers, discussed later in this paper, may outweigh the benefits.
Artificial intelligence is still new to the public in some considerable measure, and the public still needs to see more products being made before it is fully integrated into society. To that end, a large majority of the businesses surveyed plan to invest in skills development for their workforces. In 80 percent of the cases where companies are replacing roles with AI, they are hanging onto those workers by redeploying or retraining them, the study found. This varies by industry, led by fast-moving consumer goods (94 percent); aerospace and automotive (87 percent); energy, oil and gas (80 percent); and pharmaceutical and life sciences (78 percent). Geographically, China expects to make the greatest investment in its workforce (95 percent) (Automation World).

There have been considerable results showing that artificial intelligence is beneficial to society: businesses have brought faster production and assembly lines to manufacturing companies, although much of the work once done by machine operators is being replaced by faster technology. That faster technology has benefited the public, because more innovative products are being offered and innovation has made it easier for people to make transitions in life. Innovation from artificial intelligence could become the most considerable change of all, because we have been led to believe that change helps shape the world. Artificial intelligence also remains one of the most heated topics for discussion, since users must learn the newer technology being produced and few people fully understand it. Businesses have adopted artificial intelligence because it produces the most effective results for the public and replaces services that have been changing with innovation. The changes it has brought have benefited other areas of expertise that have transitioned into more innovative designs. Information technology has shown considerable results and has undergone many changes since its beginning. Even programming is changing to keep in line with making robotic engineering possible, including holographic interfaces that control robots, devices and other forms of technology, and technology is transitioning into artificial intelligence designs that have become more transparent to their users.

The manufacturing industry was one of the first industries to implement artificial intelligence in its own processes. Manufacturers have made strides using robots to build automobiles, which has cut down production time, and other industries have taken notice since it is easier to transition into using innovative ideas. These ideas have brought positive results to the companies that adopted them and have shown why the changes matter to those who decide to bring them to the automobile industry. Tesla founder Elon Musk speculates that artificial intelligence will surpass solely human-based efforts by the year 2030. In a Twitter response to a research paper that hypothesizes the human race will be overtaken by AI-equipped robots by 2060, Musk mused, "Probably closer to 2030 to 2040 in my opinion." People frequently cite but critically underestimate the speed at which machine learning is helping data sets refine accuracy and displace user-based error to deliver clean and distraction-free processes safely. The speed at which these efficiencies are turning into improvements in our everyday lives is not something we're having to wait long to experience at all; it's happening faster than many continue to project (Prescher, June 24, 2017).

Those who work in the artificial intelligence and information technology career fields have been taught to make new strides in technology, even as some argue that artificial intelligence is interfering with Mother Nature while others insist it is meant to help us. One view even suggests artificial intelligence is doing God's work, since the consensus holds that we need newer innovation in technology to keep up with the world. The world and technology have been changing ever since information technology was invented and brought into the business world, and we have seen changes in how we understand innovative technology, made possible by bringing the best artificial intelligence engineers and programmers into the technology world.
Some advantages which have changed the way we see artificial intelligence include bringing more innovation to the automobile industry and to the manufacturing companies which have used robotic engineering to replace personnel. The replacement of personnel suggests that we may yet translate the idea that technology shapes the world into making it a safer place. We have seen numerous companies change their perspectives on artificial intelligence because they have found advantages that give them quicker and more innovative ways to complete their work. Stephen Hawking, one of the most renowned scientists with extensive knowledge of artificial intelligence, provided much research on these topics. Some of his writings suggest it remains unknown how artificial intelligence will change or replace mankind, or how well the future will fare with technology becoming faster at the fingertips of the user. Robotic engineers still do not know how well innovative technology can handle large bodies of data, though there are arguments for replacing humans with different forms of artificial intelligence. Researchers have contemplated that artificial intelligence may begin to be used in classrooms and could eventually replace teachers, but they have not yet tested the theory that humans may be replaced. Others have suggested it will be used in classrooms to assist or replace teachers in order to control costs, as the public has wished. Nevertheless, artificial intelligence brings speed and low-cost solutions that could substitute for teachers.
Repetitive or administrative tasks like scheduling and lesson planning are obvious candidates for artificial intelligence assistance, so intelligent computers could be used to reduce the amount of time teachers spend doing things like marking and searching or organizing lesson content (AI). School teachers could benefit from the assistance of artificial intelligence in classrooms, according to Erskine Visiting Fellow Professor Benedict du Boulay. "Artificial intelligence could be a huge help to teachers in the classroom. I don't see a future where we have replaced teachers with computers, but I think there are times when a teacher is trying to divide attention between a lot of kids and specially designed computer programs can help fill that gap," he says.

Other advantages under discussion include the use of artificial intelligence by doctors to find areas in their expertise to cure diseases or discover medications. Some experts have recognized that artificial intelligence could find diseases faster, because the technology is more advanced than older methods such as the heart rate monitors used to test a person's health. While detecting heart rhythms requires an electrocardiogram (ECG), these sensors can be incorporated into cheap wearable technology and connected to a smartphone. One method has been used to test for diabetes in the medical field by checking whether a patient has any damaged blood vessels: it was trained to recognize the leaky, fragile blood vessels that occur at the back of the eye in poorly controlled diabetes, and this artificial intelligence is now working with real patients in several Indian hospitals (Luke Oaken-Rayner). The most traditional way patients will encounter these systems is where specialized equipment is needed to make a diagnosis: you make an appointment for a test, go to the clinic and receive a report, and while the report will be written by a computer, the patient experience will be unchanged. The medical fields have also sought faster ways to ensure the public can provide for their families and to find advancements that cure the medical problems in their fields. Cardiologists are very good at their jobs, but they're not infallible. To determine whether something's wrong with a patient's heart, a cardiologist will assess the timing of their heartbeat in scans (Leary, January 3, 2018).
According to a report by BBC News, 80 percent of the time their diagnosis of various heart problems is correct, but it is the remaining 20 percent that shows the process has room for improvement (Leary, January 3, 2018). Artificial intelligence has also shown reasonable benefits by becoming better at diagnosing the diseases studied by doctors in the cardiology field and more accurate at finding their problems. To that end, a team of researchers from the John Radcliffe Hospital in Oxford, England, developed Ultromics, an artificial intelligence diagnostics system that is more accurate than doctors at diagnosing heart disease. Cardiologists utilize artificial intelligence to help them diagnose changes in their patients' health and to provide clinical results suggesting how to manage an individual's medical problems. Ultromics was trained using the heart scans of 1,000 patients treated by the company's chief medical officer, Paul Leeson, as well as information about whether those patients went on to suffer heart problems. The system has been tested in multiple clinical trials, and Leeson told BBC News it has greatly outperformed human cardiologists; the specific results of the Ultromics trials are expected to be published in a journal later this year (Leary, January 3, 2018). According to recent statistics, there are as many as 500,000 upper-limb amputees in the U.S., with 185,000 new amputations every year. An AI-driven approach can allow people with amputations to detect things with a prosthetic more like a real hand and transform their lives (Zaidi, October 3, 2017). The blind and visually impaired have benefited from such applications as well: an app called EyeSense is making strides in providing them with independence.
Its unique design is powered by advanced computer vision and AI techniques that interpret the visual world and describe it out loud for you. There is also already a solution for monitoring whether patients are really taking their medications: the AiCure app, supported by the National Institutes of Health, uses a smartphone's webcam and AI to autonomously confirm that patients are adhering to their prescriptions, or in better terms, supports them in managing their condition (Zaidi, October 30, 2017). This is very useful for people with serious medical conditions, for patients who tend to go against the doctor's advice, and for participants in clinical trials. Doctors could benefit from artificial intelligence because the amount of data could be beneficial to the public and could provide patients with information on newly developed prescription medications. With the amount of data available to physicians today, from information about disease symptoms to new drugs, interactions between different drugs and how different people treated in the same way can have very different outcomes, the ability to access and digest information is fast becoming a required skill. And it is one that machine learning is uniquely designed to master. "Doctors are realizing that if they want to make sense of massive amounts of data, machine learning is a way of allowing them to learn from that data," says Francesca Dominici, a professor of biostatistics at the Harvard T.H. Chan School of Public Health and co-director of the Data Science Initiative at the university (Parks, 2017).

There have also been numerous discussions about whether artificial intelligence will play a significant role in the human population, because it poses considerable risks to teachers in the classroom. Teachers have been a widely discussed topic with regard to being replaced by a cheaper alternative, with state and federal governments mentioning the use of artificial intelligence in classrooms. Automation has affected nearly every industry: more than 47 percent of U.S. employees are at risk of computer automation, according to an Oxford University study, and teachers are no longer exempt. While many of the supporting roles in education, from librarians to bus drivers, are facing the imminent threat of being replaced by computers, teachers are being told that they are not being made redundant by technology just yet. Teaching does not have many components which naturally lend themselves to automation, only some 10-20 percent of teachers' time according to a recent McKinsey report (AI). Using artificial intelligence in classrooms could put teachers out of work and create cost savings for schools, but it would not promote keeping teachers on as personnel. The Clayton Christensen Institute, a nonprofit, nonpartisan think tank dedicated to improving the world through disruptive innovation, recently released a new report, "Teaching in the Machine Age: How Innovation Can Make Bad Teachers Good and Good Teachers Better." The report explains that artificial intelligence machines will not replace teachers, but rather allow schools to address three challenging situations: when schools lack expert teachers, when expert teachers must tackle an array of student needs, and when expert teachers need to teach more than academic content. Ideally, students would be taught using methods that best help them learn (AI).
With this method, artificial intelligence would significantly help to improve classrooms, but it would reduce the number of teachers needed to provide teaching opportunities, putting numerous employees at risk of unemployment because innovation has replaced older methods once suited to basic instruction. Schools have used Siri to help special needs students learn faster, and other forms of artificial intelligence have in some locations replaced methods such as using books in the classroom. The results from schools which have gone for virtual classrooms are also disappointing. The National Study of Online Charter Schools, the first major study of this growing sector, has taken a wrecking ball to the idea that pupils learn as effectively in such an online setting. Despite the digital glitz, it concludes that online learning has failed to match the teacher at the front of the class. The report, from researchers at the University of Washington, Stanford University and the Mathematica policy research group, found online pupils falling far behind their counterparts in the classroom. In maths, it was the equivalent of pupils having missed an entire year in school (AI).
There are also cons suggested by researchers, who argue there is not enough evidence that artificial intelligence is useful across career fields. Despite having positive, life-changing applications, these negatives mean that artificial intelligence is not fully trusted. Elon Musk, known for disrupting the automotive market with Tesla, has been accused of fear-mongering, but he seems to genuinely worry that artificial intelligence could go rogue and lead to the destruction of humanity. Bill Gates says we shouldn't panic over artificial intelligence, but we do need to ensure that artificial intelligence technologies are implemented properly, and that means ensuring the technologies are ethical. Some believe it is unethical to use artificial intelligence to solve real-world problems, because certain views do not permit its use at all. Unfortunately, there is a serious problem with creating ethical artificial intelligence technologies: there is no single view of ethics and morality that spans all countries, cultures, religions, and people, and cultural norms, which greatly impact morality, vary widely from country to country and group to group. Another viewpoint suggests artificial intelligence is not a serious threat to the automobile industry, even though there have been accidents on the robotic side of artificial intelligence in that industry.
There could also be disadvantages in the medical career fields, where artificial intelligence has not always been accurate in innovating new ways to prevent disease. An additional disadvantage to using smart medical technology is the fact that robots are completely logical and do not contain code to feel empathy. In this respect, human capabilities far outreach those of computers. One of the most important aspects of a physician's job is the patient interface: doctor-patient interactions are imperative to establishing a connection, and therefore trust, and provide individual comfort to patients. Artificial intelligence is heavily reliant on logic, as everything in its code is cut and dried, allowing it to function with little error (TechandMed). While this logic permits computers to surpass human abilities in certain practices, it restricts robots from connecting with patients on a personal level. Furthermore, it constrains computers' abilities to take calculated risks to preserve human life like doctors can, and instead makes them dependent on what the logical and most accurate decision should be. In certain instances, this could mean that the computer does not attempt to save a patient's life, whereas a doctor would do as much as possible to save the life (TechandMed). Another controversy is the cost effectiveness of technology implementation, because implementing a new product or design costs more money while using different technology to provide customers with services. There exists a potential restriction on investment in new health care technology: with more advancements in medical technology comes more spending on health care in the U.S. Some studies suggest that the demand for certain surgeries is directly caused by purveyors of the surgical technology (TechandMed). This can lead to the preference of an expensive procedure even if there is little to no benefit to the patient.
To evaluate the cost per patient, executives of health systems must determine whether the costs of investing in new medical technologies will increase or reduce other factors, such as the amount of time a patient will stay in the hospital post-operation or the number of visits that a patient will have to make to a physician's office. Some individuals believe that artificial intelligence in the medical community would make workers complacent, based on previous studies from other industries which implemented it and found that workers grew confident in the computers while becoming unable to perform the tasks themselves. There are obvious problems here: a system is only as good as the data it learns from. Take a system trained to learn which patients with pneumonia had a higher risk of death, so that they might be admitted to hospital. It inadvertently classified patients with asthma as being at lower risk. This was because in normal situations, people with pneumonia and a history of asthma go straight to intensive care and therefore get the kind of treatment that significantly reduces their risk of dying. The machine learning took this to mean that asthma + pneumonia = lower risk of death (Nogrady, 11/10/2016).
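The asthma pitfall can be reproduced in a few lines. In this Python sketch the patient records are fabricated so that asthma patients show a lower observed death rate, because the data never records the intensive care that produced that outcome; any model fit only to these outcome frequencies would learn the backwards rule:

```python
# Fabricated records illustrating the confound: asthma patients went
# straight to intensive care, so they died less often in this data,
# even though asthma genuinely raises the underlying risk.
records = [
    # (has_asthma, died)
    (False, True), (False, True), (False, False), (False, False), (False, False),
    (True, False), (True, False), (True, False), (True, False), (True, True),
]

def observed_death_rate(records, has_asthma):
    outcomes = [died for asthma, died in records if asthma == has_asthma]
    return sum(outcomes) / len(outcomes)

# A model trained on outcome frequency alone "learns" asthma = lower risk.
print(observed_death_rate(records, True))   # 0.2
print(observed_death_rate(records, False))  # 0.4
```

A system deployed on this learned rule would deprioritize exactly the patients who most need admission, which is why hidden treatment effects in the training data matter as much as the algorithm itself.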

Some have suggested it is not beneficial to use artificial intelligence, based on previous experience showing that people become dependent on technology. Workers have grown considerably lazy with developments from artificial intelligence, which have shown them faster ways to finish designs and projects for manufacturing companies. Software programs need regular upgrading to adapt to the changing business environment and, in case of breakdown, present a risk of losing code or important data; restoring this is often time-consuming and costly. Artificial intelligence involves giving machines and programs the ability to think like a human, and businesses are increasingly looking for ways to put this technology to work to improve their productivity, profitability and business results. There have also been considerable ethical concerns that automation technology driven by artificial intelligence could give rise to job losses. People become reluctant to learn how to spot problems because they are relying on artificial intelligence, placing too much emphasis on using it instead of using the machine to help them figure out newer ideas. Rather than worrying about a future artificial intelligence takeover, the real risk is that we can put too much trust in the smart systems we are building. Recall that machine learning works by training software to spot patterns in data. Once trained, it is then put to work analyzing fresh, unseen data. But when the computer spits out an answer, we are typically unable to see how it got there. The computer can only do what the people teaching it have trained it to do, and machines do not always provide the best answers to real-world problems (Nogrady, 11/10/2016).

Artificial intelligence could become the best way to bring innovation in technology to the public, because a vast amount of research has shown it is the fastest way to bring about change. The changes made so far suggest that users are able to adapt to the suggestions made in the information technology career fields, based on the ideas that brought us applications which make it easier to navigate, produce and build bridges with the public. Artificial intelligence may soon begin to help teach students more quickly, because it can be run from technology and has become easier to use. Assembly lines in the automobile industry could be run by robots while being monitored by people in the manufacturing industry, and medical personnel could find ways to cure diseases and improve the medicines used by patients. Information technology has given us a new career field that has made everyday life easier. There will also be more innovative artificial intelligence projects brought to the public, based on results already utilized by the public, by companies, and by those who believe innovation is the key to success. Success from artificial intelligence grows and brings transitions to the information technology career field, because that field is always adapting to change while bringing success to those who find it appealing to use. Artificial intelligence has proven effective ever since John McCarthy developed it in the 1950s, and it will continue to make strides as newer technology adapts to change.
Any member of an information technology department will see the changes that have brought them results, based on the amounts of data given to them and what they believe will be the most innovative thing possible. The public will compare artificial intelligence to the first computer invented, since people like to see changes in technology and newer items that could improve their results. What I believe will happen is that the amount of information being used will bring results to users and provide us with the most accurate descriptions of services, services chosen based on changing technology and on the information we use in our own lives.

McCarthy, John (1960). “Recursive Functions of Symbolic Expressions and Their Computation by Machine”. Communications of the ACM. 3 (4): 184–195. doi:10.1145/367177.367199.

Garfinkel, Simson (1999). Abelson, Hal, ed. Architects of the Information Society, Thirty-Five Years of the Laboratory for Computer Science at MIT. Cambridge: MIT Press. p. 1.

Nogrady, Bianca. "Future – The Real Risks of Artificial Intelligence." BBC, 10 Nov. 2016.
N/A. "Surgery and AI Disadvantages." Technology and Medicine, TechandMed, 17 Dec. 2016.

Dcomisso. "Risks and Limitations of Artificial Intelligence in Business." NIBusinessInfo, 11 Dec. 2017.

McCorduck 2004, p. 51; Russell & Norvig 2003, pp. 19, 23.
Park, A. (2017, October 06). Machine-Learning Programs Help Doctors and Their Patients. Retrieved April 20, 2018.
Kirwan, B. (2017, November 03). John McCarthy (American computer scientist) was born on September 4, 1927. Retrieved April 21, 2018.

Sharma, Kristi. "Everyone Is Freaking out about Artificial Intelligence Stealing Jobs and Leading to War – and Totally Missing the Point." Business Insider, 20 Nov. 2017.

Leary, K. (2018, January 03). AI can diagnose heart disease and lung cancer more accurately than doctors. Retrieved April 21, 2018.
Hand, A. (2017, January 18). The Natural Progression of Artificial Intelligence. Retrieved April 21, 2018.
