
How AI Is Changing STEM


We asked our PF Advisors, “How do you see the rise in AI affecting STEM in the lab, classroom, industry, and/or everyday society?” We got so many great responses that we had to split them into parts. This is part 2; read part 1 here. Enjoy!

gleem

AI: Maybe not quite ready for prime time, but it is coming.

I have no particular expertise in AI, but I try to follow the development of AI and robotics routinely, checking for significant advances more or less weekly. Many occupations are ripe to be significantly affected by AI, particularly those based on executing standard procedures that do not require delicate physical manipulation under varying conditions. I include robots in this discussion since, even if they do not actually employ AI software to perform their tasks, they can and will be controlled by AI supervisors.
Before I continue, I recommend the reader view the following informative video from DARPA explaining and reviewing the elementary aspects of AI development and capabilities: https://www.darpa.mil/about-us/darpa-perspective-on-ai

Experts are divided on the magnitude of the impact on technology and society in general, particularly with regard to AGI. However, I do not believe AGI is in the cards for the next ten or twenty years. Most of the near-term impact will come from AI taking over more of the plethora of rules-based tasks. Most AI development is still carried out with traditional computer hardware and software, which limits the complexity of the information processing. However, recently introduced AI-dedicated processors, like Intel’s NNP (Neural Network Processor) chip introduced late last year, will have a major impact on AI implementation, perhaps as LSI did in the semiconductor industry. It is said that the complexity of AI systems (related to the degree of interconnectedness, as with synapses) is increasing by a factor of 10 each year. https://www.nextplatform.com/2019/11/13/intel-throws-down-ai-gauntlet-with-neural-network-chips/

Similarly, MIT has developed an artificial-synapse chip that mimics the synapses of neurons, eliminating the need for complex computation and reducing the memory requirements associated with software-implemented AI. If this continues, we can expect to see AI systems as complex as the human brain sooner than some have predicted.
http://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608

Let me make some general remarks about issues that have been seen as preventing AI’s extensive implementation. AI has been criticized for being capable of only a single task, and for losing that ability when retrained for another. It has been criticized in language applications for not being context sensitive. It has been criticized for containing biases inadvertently embedded through the use of biased training data. It has been criticized for being a power glutton. It is criticized because it is difficult or impossible to determine how it arrived at its conclusions: how do we verify that those conclusions are reasonable? These problems are being solved. Finally, perhaps the biggest problem and the most difficult to overcome is acceptance by the general population, given concerns about bias, liability, misuse, and privacy. Privacy may be the prime concern of most people.

As far as STEM is concerned, AI is having an impact on data analysis. Some data are difficult for humans to assess or analyze quickly, and modern experiments and instrumentation are producing a tremendous increase in data acquisition rates, so processing and analysis must be done quickly and efficiently. AI has had success with analyzing astronomical data, for example, and the LHC has a huge backlog of data to be analyzed. This is good from the standpoint of accelerating the discovery of new science or technological advances, but it may lead to a reduced need for humans (grad students?). Some data, like frequency spectra, permit only rudimentary human processing, whereas animals like bats and whales can process complex reflected sound waves to locate and identify prey or to navigate. One company uses reflected microwave frequency spectra to identify weapons of various sorts with reasonable accuracy based on their spectral signatures, something a human cannot do. Coding is one of the leading IT jobs, and more AI is being developed to write programs and develop algorithms; one new application has produced usable algorithms, some surprisingly good, starting from scratch with no previous examples to direct it.
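To make the spectral-signature idea concrete, here is a minimal sketch in Python of that style of classifier. Everything in it (the synthetic data, the class names, the choice of a random forest) is my own illustrative assumption; the actual commercial system is not public.

    # Sketch: classify objects by their frequency-spectrum signatures.
    # Synthetic data only; a real system would use measured microwave spectra.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_bins = 64  # spectral bins per measurement

    def synthetic_spectrum(kind):
        # Give each (hypothetical) class a characteristic resonance peak plus noise.
        freqs = np.arange(n_bins)
        peak = {"benign": 10, "weapon": 40}[kind]
        return np.exp(-((freqs - peak) ** 2) / 20.0) + 0.1 * rng.normal(size=n_bins)

    labels = ["benign", "weapon"] * 200
    X = np.array([synthetic_spectrum(k) for k in labels])
    y = np.array(labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

The point is how little code the classification itself takes once labeled training spectra exist; the hard part in practice is collecting and labeling the data.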

Considering the impact of COVID-19 on the economy and society in general, there is great incentive for businesses to rethink how they can operate with reduced dependence on humans. Keep in mind that one of the greatest expenses and management problem areas of any company or organization is human resources. It has been said that new jobs will be created by AI, but I would not be surprised to see those new jobs handled by AI as well. The FDIC wants to develop a new quarterly reporting system to replace the one used by banks to fulfill government reporting requirements; the last quarter’s reports were delayed, and some were incomplete, because of COVID-19. One bottleneck is data input and collection, often done by humans. AI to the rescue?

A report from MIT (https://www.sciencedaily.com/releases/2020/05/200504150220.htm) states that each robot replaces 3.3 workers. One bright spot may be the replacement of bureaucrats who just manage documents or process forms, all of which are tasks suited to standard protocols. We might yet go paperless.

Previously I thought healthcare workers would be among the least affected by AI, but again COVID-19 may have a significant impact by dictating the need to reduce contact between patients and health providers, and therefore the number of healthcare workers. Robots delivering needed supplies and meds would alleviate some of the problems, and AI health managers could handle scheduling and managing resources to improve efficiency.

Routine communication between humans and AI, as with Alexa, for general information exchange will become more reliable and more widely accepted; there is significant progress in this area. Recently I looked at over 900 job classifications, along with their current employment, on a website sponsored by the US Department of Labor (https://www.careeronestop.org/Toolkit/Careers/careers-largest-employment.aspx?currentpage=1), looking for those that could be impacted by AI or robotics. I identified about 40 million jobs that could be impacted in some way, most coming from the top 50 job classifications; that is about 25% of all full-time workers. Outside of manufacturing, some of the largest job classes involve communication or human interaction in some respect. Cash exchange is disappearing ever faster, especially with mobile payment; Amazon is developing unattended stores, and Walmart is trying automated stocking robots and floor-cleaning robots and doing away with cashiers. Any current automated system might be controlled or supervised by AI instead of interacting directly with humans, through what I have just found out is called a “central manufacturing execution system”. Humans will be the last line of defense, ironing out any wrinkles that AI cannot yet handle, until it can.

We know that mechanical automation and computerization have impacted many blue-collar and low-end white-collar jobs. A study by the Brookings Institution has determined that AI in its more esoteric forms will have a substantial impact on white-collar jobs at the other end of the job spectrum, heretofore not impacted by automation: https://www.brookings.edu/wp-conten…e-affected-by-AI_Report_Muro-Whiton-Maxim.pdf. The report gives the relative impact of AI on these jobs, though it notes that its predictions still carry a great amount of uncertainty.

Certainly, some recent implementations have been controversial. Microsoft’s experimental chatbot Tay, which was supposed to learn from the web, was found to be easily manipulated and corrupted by mischievous persons, like a child raised by bad parents. Last month Microsoft replaced 50 journalists with AI that selects articles from the media to be featured on its website; the AI mixed up photographs of members of the band Little Mix, who are women of color, causing a row. AI facial recognition systems are well known for having difficulty identifying people of color. I am confident that this will be resolved in future systems.

AI may never manifest itself directly in anthropomorphic figures, but it will be pervasive. AI may not be 100% reliable (neither are humans), but it is incredibly faster than humans, and, like any tool that extends human capabilities, it can be a great advantage, especially in business and war.

I am a skeptic regarding the control of nefarious applications of AI. The development of such applications cannot be controlled if an AI app gives an advantage to the developer; computer hacking is illegal, but it continues unabated.

 

Astronuc

That’s an interesting question. It would help to first understand what AI is.
https://www.accenture.com/us-en/services/digital/what-artificial-intelligence-really
https://www.ibm.com/it-infrastructure/solutions/ai
https://www.ibm.com/it-infrastructure/linuxone

PNNL uses AI in large and small applications. Data analytics, or ‘Big Data’, is one area. AI is useful for analyzing big data sets, but the results are only as good as the data and the rules-based engine.

AI is useful for analyzing networks or systems, and it is even more useful if it has foresight, i.e., if it is predictive, anticipatory, and/or insightful. If a prediction or insight is wrong, a system may go unstable, and damage or failure may ensue. In some cases the damage or failure may be benign, i.e., the consequences are not significant, but if damage or failure results in the injury or death of a person or persons, or of animals, then it is obviously catastrophic and irreparable.
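As a toy illustration of that kind of predictive monitoring, here is a minimal sketch in Python of a detector that flags sensor readings deviating sharply from recent history. The approach (a simple rolling-statistics threshold) is entirely my own illustrative assumption, not anything PNNL actually uses.

    # Sketch: flag anomalous sensor readings before a system goes unstable,
    # using a trailing-window mean/std threshold. Purely illustrative.
    import numpy as np

    def anomalies(readings, window=20, k=3.0):
        """Return indices where a reading deviates more than k sigma
        from its trailing-window statistics."""
        flagged = []
        for i in range(window, len(readings)):
            hist = readings[i - window:i]
            mu, sigma = hist.mean(), hist.std()
            if sigma > 0 and abs(readings[i] - mu) > k * sigma:
                flagged.append(i)
        return flagged

    rng = np.random.default_rng(1)
    signal = rng.normal(0.0, 1.0, 300)
    signal[250] += 8.0              # injected fault
    print(anomalies(signal))        # should include index 250

Real monitoring systems replace the rolling threshold with learned models, but the structure (watch, predict, flag) is the same.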

Microsoft, Google, Facebook, and Amazon all use forms of AI. They claim it enhances the user experience, but it is more a matter of manipulation, IMO.

In science and engineering, I see AI as being useful for dealing with complex problems with many variables. One relevant case would be finding the optimal composition for a complex alloy, such as stainless steel (Fe-based, or specifically Fe-Cr-Ni-Mo-Mn-(C,N)-based). The base element is Fe, but a certain level of Cr is needed for corrosion resistance, a certain level of Mo for high-temperature creep resistance and resistance to hydrogen embrittlement, and certain levels of Mn and Ni for austenite stability and toughness; all of these affect strength in conjunction with the levels of C and N and the subsequent thermo-mechanical treatment. There are also minor elements, e.g., Si, Nb, V, Ti, and Zr, which are important with respect to binding with O, but also with C and N, where they act as dispersed strengtheners. Finally, various impurities, notably S and P, then As, Al, B, Cu, Co, Sn, Sb, and so on, must be kept to low levels to ensure corrosion resistance and mechanical integrity in adverse environments.

The elements can be combined and analyzed using computational chemistry software, e.g., CALPHAD, or other software, in order to determine various thermophysical and thermomechanical properties of an alloy. There is ancillary or complementary software for determining behaviors like corrosion or creep as a function of environment (including stress, temperature, environmental chemistry, . . . ). Such problems get very large, very quickly. https://en.wikipedia.org/wiki/Computational_chemistry
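To show how such a composition search might be wired up, here is a minimal sketch in Python. The composition bounds and the merit function are invented placeholders, not real metallurgy; in a real workflow the scoring step would call CALPHAD-type software for the property evaluations.

    # Sketch: random search over a constrained stainless-steel-like
    # composition space. score() is a made-up stand-in for a real
    # thermodynamic/property calculation.
    import numpy as np

    ELEMENTS = ["Cr", "Ni", "Mo", "Mn"]          # balance is Fe
    BOUNDS = {"Cr": (16, 20), "Ni": (8, 12), "Mo": (0, 3), "Mn": (0, 2)}  # wt%

    def score(comp):
        # Hypothetical merit: reward corrosion resistance (Cr, Mo) and
        # austenite stability (Ni, Mn). NOT a real property model.
        return 2.0 * comp["Cr"] + 1.5 * comp["Mo"] + comp["Ni"] + 0.5 * comp["Mn"]

    rng = np.random.default_rng(2)
    best, best_score = None, -np.inf
    for _ in range(10_000):
        comp = {el: rng.uniform(*BOUNDS[el]) for el in ELEMENTS}
        if sum(comp.values()) > 35:              # keep enough Fe in the balance
            continue
        s = score(comp)
        if s > best_score:
            best, best_score = comp, s
    print({el: round(v, 2) for el, v in best.items()}, round(best_score, 1))

With four elements a random search suffices; with a dozen elements plus impurity limits and processing variables, the space explodes, which is exactly where AI-guided search earns its keep.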

It gets even more complicated if one then takes an alloy and simulates its response in a radiation field, as in a nuclear reactor environment. A neutron flux results in displacement of atoms in the metal lattice while also leading to activation and transmutation of the isotopes and elements, and the gamma radiation induces electron displacements that influence the chemistry at the atomic level.

So AI, if used correctly, can be beneficial. But it could also be misused.

The examples of Microsoft, Google, Facebook, and Amazon and their use of AI involve monitoring the websites one visits, the content one browses, or the online purchases one makes, in order to direct advertisements or news/information intended to influence, which after all is the goal of advertising. One could simply be exercising one’s curiosity about something, but the AI does not ‘understand’ one’s motivation; nevertheless, one will find advertisements related to one’s search or query.

Misuse or abuse could come in the form of misinformation. For example, AI that pushes a health treatment, in the absence of critical information, on someone who might have contraindications is a misuse, or even negligence/abuse, of AI, IMO.

AI requires truthful, factual information (accuracy) in order to function properly.

Remember, AI is a tool that can be used for positive and productive purposes as well as nefarious ones, depending on the user and his/her/their motivations.

russ_watters

I have a different view of AI than many people, because I find it to be poorly defined and often science fiction-y, which makes it potentially less valuable or profound than other people think it is… but I think being mundane is what makes it profound. Maybe that’s because I’m a Star Trek (TNG) fan, and the character Data has influenced my view. Data is a human-like android that in most ways far exceeds human capabilities (strength and intelligence), yet nobody would ever mistake him for a human because he can’t understand basic human emotions and irrational thought — he’s too logical. He can run a starship, but can’t successfully deliver a knock-knock joke!? So if AI is a computer program or robot that can pass for human, I say “why bother?” Or further: “why would we want such a limited and flawed machine?”

The “AI effect” is the idea that anything we haven’t been able to do yet gets labeled “AI”. It has come about because of the problem with Data: things that were once thought to be impossible problems for computers have been solved, but the result isn’t a computer that can pass for human, it’s just a really outstanding tool. Handwriting and speech recognition/synthesis, for example, would seem to be important for AI, but are now often excluded because computers can do them. These are distinctly human functions that give us a personal connection to the computer but aren’t actually that important to what a computer is/does. For example:

Me: “Hey Siri, what is the square root of 7?”
Siri: “The square root of 7 is approximately 2.64575”.

What’s more important here, the fact that Siri’s voice wasn’t quite believably human or the fact that she spat that answer out in about a second (and displayed the next 50 digits on the screen at the same time!)?
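Incidentally, the digits themselves come from an ordinary, very fast numeric routine. A couple of lines of standard-library Python (not, of course, Apple’s actual implementation, which is not public) reproduce the 50 digits:

    # 50 digits of sqrt(7) using arbitrary-precision decimal arithmetic.
    from decimal import Decimal, getcontext

    getcontext().prec = 52           # 50 digits plus a couple of guard digits
    print(Decimal(7).sqrt())         # 2.6457513110645905905016157536392604257...

That this runs in well under a millisecond is the real point: the tool’s power is in the computation, not the voice.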

So, how do I see the rise of AI affecting us? By being ubiquitous and largely invisible, the way computers are now, but in increasingly diverse and surprising ways. It’s not about pretending to be human and not quite succeeding, it’s about being everywhere we could possibly want it to be and more places we didn’t think of (but some engineer, somewhere, did). If you ever notice a computer in a place you didn’t expect, doing a function you didn’t expect a computer could ever do, that’s AI to me.

  • It’s a thermostat that learns your preferences and analyzes fuel costs to decide what fuel to use and what duty cycle to run to maximize energy or cost effectiveness while maintaining comfort.
  • It’s a car that learns you like early shifting and torque rather than revving up the rpm, and adjusts its shift changes accordingly.
  • It’s a TV/DVR that records a show you don’t even know you want to watch yet.
  • It’s a refrigerator that orders the butter and parsley you forgot when you picked up scallops yesterday, because it knows you always sauté them and you’re out.
  • It’s a cloud that guesses you have coronavirus because you haven’t left your bedroom in 3 days and someone who was at the grocery store at the same time as you last week has an uncle he played poker with who is infected.
  • It’s a social media platform that guesses you want to be outdoorsy because you’re more likely to “like” a post showing camping and hiking than movies and bowling, but it knows you aren’t, because you actually go to the movies and bowling far more often… so it shows you advertisements for movies, not tents.

Yeah, those last few show “AI” can be intrusive if you are of a mindset where you find surprisingly intimate applications of its knowledge and “thinking” unsettling. But the upside is much bigger, and the “internet of things” and “smart” …everything… are changing our lives for the better in so many ways we’ve barely even thought of yet.
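To spell out the logic of that last bullet, here is a toy sketch in Python of revealed preference trumping declared interest. The weights and data are wholly invented; real ad-targeting systems are vastly more elaborate.

    # Toy sketch: pick an ad topic by weighting what a user actually does
    # (outings) over what they merely "like" online. Invented weights.
    def pick_ad(likes, outings):
        topics = set(likes) | set(outings)
        score = {t: 2 * outings.get(t, 0) + likes.get(t, 0) for t in topics}
        return max(score, key=score.get)

    # Likes camping posts, but actually goes to the movies and bowling:
    print(pick_ad(likes={"camping": 9, "movies": 2},
                  outings={"camping": 1, "movies": 8}))   # -> movies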

 

jambaugh

I think the recent media hype about AI is a bit overblown. We are a very long way from achieving systems that can handle conceptual understanding as the human brain does. This is not to say we haven’t made great strides in making, e.g., neural net models more practical and useful, specifically for pattern recognition and category selection.

I can’t say how this research will affect STEM et al., but I can see how it might. In education there is great potential for improvement in automated learning; however, the current trend has been to bend the learner to fit the computerized instruction rather than the reverse. I think this is one area of application where AI research can push to define the new questions that must be answered to move forward: something like how to automate the teacher’s role in recognizing why a student made a particular mistake, and how to change the exposition to more efficiently remediate the student’s erroneous conceptualization.

Neural networks, even the newer recurrent ones, are still deterministic machines. Once trained, their outputs can be coded in a direct algorithm, so in one sense they are just another form of coding, one that uses a brute-force method of training. That training can be automated, so it is efficient in that sense, but the resulting “code” is obtuse to the programmer, and unexpected (principally negative) consequences will arise as we invest too much trust in these hidden algorithms.
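That determinism is easy to see in miniature: once the weights are frozen, a network is nothing but fixed arithmetic. Here is a sketch with made-up weights standing in for any trained network.

    # A "trained" network reduced to plain arithmetic. Weights are invented.
    import numpy as np

    W1 = np.array([[0.5, -1.2],
                   [0.8,  0.3]])          # frozen first-layer weights
    b1 = np.array([0.1, -0.4])
    W2 = np.array([1.0, -0.7])            # frozen output weights

    def net(x):
        hidden = np.maximum(0.0, W1 @ x + b1)   # ReLU layer
        return float(W2 @ hidden)               # straight-line computation

    x = np.array([0.3, 0.9])
    print(net(x), net(x))   # same input, same output, every time

Nothing here learns or adapts at run time; all the “intelligence” was baked in during training, which is the point above.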

I think we are still a few paradigm shifts away from true AI in the sense it is being portrayed today. You can see this if you listen carefully to Siri, Alexa, and the other voice recognition systems and realize that they never really learn anything in terms of individualized actions. They encode aggregate knowledge based on all responses, without the ability to be truly interactive at the level of encoded meaning. This is why they run on centralized servers, and this is why they cannot adapt to individual users beyond selecting from a set of hard-wired, finite customization options.

So my prediction is a future of mild disappointment in the “promise of AI” over the next few decades, until some epiphany leads to another paradigm shift. Of course, such events are wild cards; their accurate prediction would be tantamount to their actualization.
