Construction Project Management Using Artificial Intelligence (AI)

 

 


Introduction

The term ‘Artificial Intelligence’ was first coined in 1956 by the prominent computer and cognitive scientist John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College. McCarthy invited a group of academics from various disciplines, including language simulation, neuron nets, and complexity theory, to a conference entitled the ‘Dartmouth Summer Research Project on Artificial Intelligence’, which is widely considered to be the founding event of artificial intelligence as a field. At that time, the researchers came together to clarify and develop the concepts around “thinking machines”, which up to that point had been quite divergent. McCarthy is said to have picked the name ‘artificial intelligence’ for its neutrality, to avoid highlighting any one of the tracks then being pursued in the study of “thinking machines”, which included cybernetics, automata theory, and complex information processing. The proposal for the conference stated: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Today, modern dictionary definitions describe Artificial Intelligence as a sub-field of computer science focusing on how machines might imitate human intelligence: being human-like, rather than becoming human. Merriam-Webster provides the following definition: “a branch of computer science dealing with the simulation of intelligent behaviour in computers.”


The term ‘Artificial Intelligence’ has been overused in recent years to denote artificial general intelligence (AGI), which refers to self-aware computer programs capable of real cognition. Nevertheless, most AI systems, for the foreseeable future, will be what computer scientists call “Narrow AI”, meaning that they will be designed to perform one cognitive task well, rather than “think for themselves”.

While most of the major technology companies haven’t published a strict dictionary-type definition of Artificial Intelligence, one can infer how they define the importance of AI by reviewing their key areas of research. Machine learning and deep learning are priorities for Google AI and its tools to “create smarter, more useful technology and help as many people as possible”, from translations and healthcare to making smartphones even smarter. Facebook AI Research is committed to “bringing the world closer together by advancing artificial intelligence”; its fields of research include Computer Vision, Conversational AI, Natural Language Processing, and Human & Machine Intelligence.

IBM’s three main areas of focus include AI Engineering, building scalable AI models and tools; AI Tech, where core capabilities of AI such as natural language processing, speech and image recognition, and reasoning are explored; and AI Science, where expanding the frontiers of AI is the focus.

In 2016, several industry leaders in Artificial Intelligence, including Amazon, Apple, DeepMind, Google, IBM and Microsoft, joined together to create the Partnership on AI to Benefit People and Society, to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influence on people and society. Those working with AI today make it a priority to define the field in terms of the problems it will solve and the benefits the technology can have for society. It is no longer a primary objective for most to create AI techniques that operate like a human brain, but to use the technology’s unique capabilities to enhance our world.

Machine learning algorithms use a large amount of data to adjust their internal structure such that, when new data is presented, it is categorised in accordance with the data given previously. This is called “learning” from the data, rather than operating according to categorisation instructions written explicitly in the code.

Imagine that we want to write a program which can tell cars apart from trucks. In the traditional programming approach, we would try to write a program which looks for specific, hand-chosen characteristics, such as a vehicle’s size or number of wheels, and applies fixed rules to them. In the machine learning approach, we would instead show the program many labelled examples of cars and trucks and let it work out the distinguishing characteristics for itself.
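
To make this concrete, here is a minimal sketch of the “learning from data” idea in Python, using the scikit-learn library (an assumption, since the text names no particular tool) and illustrative, made-up feature values:

from sklearn.tree import DecisionTreeClassifier

# Each vehicle is described by two features: [weight in tonnes, number of wheels].
# Labels: 0 = car, 1 = truck. All values are illustrative, not real measurements.
features = [
    [1.2, 4], [1.5, 4], [1.8, 4],     # cars
    [7.5, 6], [12.0, 8], [18.0, 10],  # trucks
]
labels = [0, 0, 0, 1, 1, 1]

# "Learning": the classifier adjusts its internal structure to fit the data,
# rather than following categorisation rules written explicitly in the code.
model = DecisionTreeClassifier()
model.fit(features, labels)

# New, unseen vehicles are categorised in accordance with the previous data.
print(model.predict([[1.4, 4], [15.0, 8]]))  # expected output: [0 1]

A decision tree is used here only for simplicity; the same fit-then-predict pattern applies to most supervised learning models.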
