artificial intelligence

How AI Will Revolutionize STEM and Society

Estimated Read Time: 12 minute(s)
Common Topics: data, AI

We asked our PF Advisors, “How do you see the rise in AI affecting STEM in the lab, the classroom, industry, and/or everyday society?” We got so many great responses that we need to split them into parts; here are the first several. Enjoy!

Ranger Mike

If things run true to form, and I have seen no vast improvement in the correct forecasting of future trends in these areas, I see lots of money going in but not much usable product coming out. I choose not to dwell on predictions like the ones from the early 1980s, when we were told the factory of the future would be a lights-out operation with only a few humans doing maintenance to keep the machines running. That did not prove true, nor did America becoming a service-only economy, nor Japan taking over from the USA as the best economy. Predictions like SPC being an absolute requirement and Deming’s philosophy becoming dominant sounded good in the Harvard business school but never became reality. A.I., in my opinion, is just the latest buzzword to replace Robotics, Going Green, Sustainable, and the other buzzwords out there. Remember the movie “The Graduate” and the word “plastics”? Now it’s a dirty word, killing sea turtles.

In industry, I do see viable products being produced when the market need is properly identified. I see long-distance trucking using A.I., but not the short-haul portion, where too many variables impact the situation and human decision-making and experience come into play.

Where I run into AI is in the manufacturing environment, where a CAD model can be automatically programmed to generate machine tool code so the part can be machined and then inspected by machines for accuracy against the CAD nominal. This has happened, but it took 40 years.
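To make that CAD-to-machine-code step a bit more concrete, here is a minimal sketch of the kind of output a CAM system generates, assuming a toy Python routine that emits G-code for a simple rectangular facing pass (the feature, feed, and depth values are hypothetical; a real CAM package derives all of this from the CAD geometry and tooling data):

```python
def facing_pass_gcode(width, depth, step_over, feed, z_cut):
    """Emit basic G-code for a back-and-forth facing pass over a width x depth rectangle (mm)."""
    lines = [
        "G21 ; millimetres",
        "G90 ; absolute coordinates",
        "G0 Z5.0 ; rapid to safe height",
        "G0 X0 Y0",
        f"G1 Z{z_cut} F{feed} ; plunge to cutting depth",
    ]
    y, direction = 0.0, 1
    while y <= depth:
        x_end = width if direction > 0 else 0.0
        lines.append(f"G1 X{x_end:.3f} Y{y:.3f} F{feed}")  # cut across the part
        y += step_over
        direction *= -1
        if y <= depth:
            lines.append(f"G1 Y{y:.3f} F{feed}")           # step over to the next pass
    lines.append("G0 Z5.0 ; retract")
    return "\n".join(lines)

print(facing_pass_gcode(width=50.0, depth=30.0, step_over=5.0, feed=300, z_cut=-0.5))
```

The inspection half of the loop runs the same idea in reverse: a coordinate measuring machine probes the finished part and compares the measured points against the CAD nominal.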

 

jack action

I see AI mostly as a powerful extension of statistics. The area where AI sounds really exciting is in the search for new molecules, medical treatments, or the like. Based on past experiments, AI will quickly find patterns and determine which new experiments are the most promising and should be explored first. This is basically how statistics is used in science: to make observations, identify patterns, and guide us to our next steps. AI should be quicker and require fewer resources.
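As a rough sketch of how that “find patterns, then pick the most promising next experiment” loop can work in practice, assume past results are scored numerically and a Gaussian-process surrogate ranks untried candidates by expected improvement (the function name and data below are purely illustrative):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def suggest_next_experiment(X_done, y_done, X_candidates):
    """Rank untried experiments by expected improvement over the best result so far."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_done, y_done)
    mu, sigma = gp.predict(X_candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)                      # avoid division by zero
    best = y_done.max()
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    return X_candidates[np.argmax(ei)]

# Hypothetical data: each row encodes an experiment's settings, y is the measured outcome.
X_done = np.array([[0.1, 300.0], [0.5, 350.0], [0.9, 400.0]])
y_done = np.array([0.42, 0.61, 0.55])
X_candidates = np.array([[0.3, 325.0], [0.7, 375.0], [0.6, 340.0]])
print(suggest_next_experiment(X_done, y_done, X_candidates))
```

The suggestion is only as good as the premises encoded in the candidate list and the scoring, which is exactly the caveat below.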

As for the use of AI in everyday society, it scares me a little bit, because I’m afraid that it will also be misused the way statistics is. Just like statistics, AI will always give an answer, even if the premises make no sense. So people will try to analyze very complex systems by making a lot of assumptions and introducing subjectivity, which will give disputable results. But because it was done by AI, too many will follow blindly. That will be problematic if the findings are treated as “MUST BE” instead of “COULD BE” and set into rules – or worse, laws – that dictate our lives.

Furthermore, trusting AI to make decisions in simple, well-defined systems will be OK (like on a production line). But if we begin to trust AI without human supervision, we will have to completely reexamine our views on responsibility and liability towards one another.

 

Dr Transport

I have mixed feelings on the subject of AI. I see it as being good in some respects: in STEM education, teachers could use it to tailor teaching methods to every student individually, which could help them learn more effectively. I’ve also seen marketing use it to target sales (look at Amazon or Google: you search for one product and you start getting ads and emails for other products based on that search). Is it good or bad? I don’t know; you get to see other things out there that you weren’t aware of that might be more helpful.

On the flip side, I’ve seen the research industry use AI or Machine Learning as a buzzword to get funding for marginal research and researchers. (One of my former bosses required that any computer purchase include GPU hardware so that we could claim to use it for machine learning, and he also required us to have a paragraph in every proposal detailing how we could use AI/Machine Learning for the project. He was a buzzword chaser, and we all saw through it.)

Now, I don’t claim to be an AI/Machine Learning expert; I’ve read 10-20 papers on it, some of which were well regarded. I’ll be honest, I couldn’t get through them. They were not well written, or at least not written so that you could re-implement their ideas to check or validate their claims. What I did see was a whole lot of statistics applied in a half-hearted way to prove a point that was dubious to begin with. I believe it was Mark Twain who said, “Lies, Da**ed Lies, and Statistics….”

 

anorlunda

In conventional education, I think AI will come very slowly. But there is one area I heard about here that sounds exciting: AI personal tutors. Imagine a tutor AI that remembers every answer to every question the student ever gave. It analyzes video of the student during class to see whether the student is paying attention and whether the student shows signs of understanding or confusion. The AI may theorize about what wrong ideas the student has in his/her head.

The AI estimates what concepts are mastered, missed, or misunderstood. The AI observes and analyzes the student doing homework. The AI chooses corrective lessons (written or video, or practice problems) targeted at the student’s weak points. It could replay snips from the student’s human teacher discussing the points misunderstood, or from another teacher that the student seems more receptive to. The AI can throttle the student’s workload depending on whether the student is having a good day or a bad one.

The AI strives to give the student 100% comprehension of lesson N before lesson N+1 comes along, because N+1 is more difficult if less than 100% of N is understood. The AI reports data about the student back to the human teacher and parents.
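One minimal sketch of how such a tutor could maintain that per-concept mastery estimate is the standard Bayesian Knowledge Tracing model; the slip, guess, and learning probabilities below are illustrative placeholders, not values from any real system:

```python
def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: revise the mastery estimate for a concept
    after the student answers a question on it, then account for learning."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

# The tutor would only move on to lesson N+1 once the estimate clears a threshold.
p = 0.3
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
print(f"estimated mastery: {p:.2f}")
```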

The hope is that students with a personal AI tutor will outperform students who don’t have one. Hopefully, the incremental cost per student for a tutor would be low, so that it could be affordable or free and open-source. Also hopefully, it won’t fall victim to exploitative commercialism, malware, politics, or Newspeak. All technology can be used for good or evil.

 

STEMucator

Here are some thoughts about the rise of AI affecting STEM, and other aspects of everyday life:

– Automatic cashiers. You have probably used a self-service cash register at some point in your life. Soon we will simply put our products on a conveyor belt, and a machine will handle the rest of the work.

– Automatic lab experimentation. We will have machines that automatically execute lab procedures (e.g., in chemical lab environments, machines will control the quantities and the mixing of chemicals).

– AI instructors. Soon classroom environments will change. No more professors; instead, something like a more intelligent Siri or Cortana will teach people. Maybe classrooms will not exist anymore, and everyone will learn online.

– Automatic assembly line workers. No more factory workers. Machines that can automatically perform factory tasks will replace people.

– Self-driving vehicles, including cars and planes. These exist now but are not fully autonomous (e.g., autopilot for planes, NVIDIA’s end-to-end deep convolutional neural network for self-driving cars, etc.). Soon, you won’t see a driver at the front of your taxi.
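As a rough illustration of the end-to-end approach mentioned in the last bullet, here is a minimal PyTorch sketch of a PilotNet-style network that maps a camera frame directly to a steering angle. The layer sizes are simplified and illustrative, not NVIDIA’s published architecture:

```python
import torch
import torch.nn as nn

class TinyPilotNet(nn.Module):
    """Toy end-to-end network: a 66x200 RGB camera frame in, one steering angle out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1),   # predicted steering angle
        )

    def forward(self, x):
        return self.head(self.features(x))

# Training would regress the output against recorded human steering angles.
model = TinyPilotNet()
frame = torch.randn(1, 3, 66, 200)   # dummy camera frame
print(model(frame).shape)            # torch.Size([1, 1])
```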

 

BillTre

First, what is meant by AI (the rise of it)?

  • The current state of things, or what might be happening in a while (Sci-Fi-like)?
  • Does AI mean smart, or functioning in some particular technical way?

Anyway, it’s still going to be a combination of programming and hardware.
It is going to go through different functional expansions with respect to people:

  • filling a functional void in some way (being useful)
  • expanding into functional areas, which will eventually compete for jobs with people (causing friction)

I am not really that current on what’s going on in AI, but I can consider what kind of artificial smarts (more so than current?) might be useful in the future.
Since I don’t know that much about what is going on in AI, I guess I’ll be presenting a kind of wish-list of somewhat ideal goals to be achieved in the not-too-distant future, and consequences that might happen.

Lab:
Increasingly sophisticated assistance in making, producing, publishing research.
Finding new relationships in complex databases of information.
Taking care of the repetitive tasks that a computer can run without human interference.
Deciding when it is appropriate to do those tasks. (initiate and plan experiments)
Increasingly sophisticated literature and data searches.
Identify hitherto unknown relationships between different datasets. (making a discovery)
Eventually replacing graduate-student-level people, thus reducing the need to train so many excess PhDs.
Not sure of the effect on the PhD market: either PhDs get replaced, or AI units of equivalent intellectual level cost too much.

Classroom:
Increasingly deep knowledge of the subject area (able to discuss aspects of large-scale biology on a molecular scale, or from the viewpoint of thermodynamics).
Increasingly involved teaching assistants, gradually becoming more autonomous.
They would need to function more smoothly and reactively (able to quickly take and respond to questions) in order to replace people.
Some may think that a more human-looking and human-reacting robot would be more appealing. I don’t think it matters that much if you get the interactional personality right.

Industry:
Like lab, but more production oriented.
Should increase efficiency and flexibility of production, making production more user specific.
Just-in-time to the max.
Replacement of jobs changes how industry relates to local government (business breaks in exchange for jobs are a common thing in politics and are justified by the production of jobs, which generally means happy voters).
Without the jobs, the political equation becomes unbalanced.
The business breaks (usually tax cuts) will be harder to support.

Everyday Society:
As AI forces normal people out of jobs at which they functioned well, people will get pissed off unless it is done by attrition.
Even then, the pool of available jobs of the kind the AI is taking will shrink, tightening competition among each year’s set of new workers.
Perhaps there will be an equal or greater number of new jobs created by the new technology. I am sure that there will be new jobs, maybe better paying, but probably not as plentiful.
Things become a matter of cost: does it make more sense to build a super-competent AI machine if it’s a better deal than employing an equivalent biological person? (Sci-Fi plot line here.)

Political push-back would want to:

  • Limit the jobs in which AI can replace people (rule some jobs out)
  • Not have AI replacement happen faster than job attrition
  • Get some benefit for the population (taxes? training?)

AI is flexible; it can do any of these things.
It can wait for its opportunity (evolutionarily speaking)!
Mwa-Haa-Ha!!!

 

bhobba

Artificial Intelligence (AI) is not only making a big difference today, it is set to revolutionize the future. Just imagine one application, driverless cars: the basic problem has been solved, but it now needs to be perfected and tested as totally safe – at least safer than human drivers. It will be extremely disruptive. Truck drivers – gone. Paying for parking – gone. Cab and Uber drivers – gone. Driving licences – gone. Being stressed while driving home from work – much less: you can simply rest and relax – read a book, watch TV, surf the net – all sorts of things. Traffic police – gone. Income from driving offences – gone. I suspect you can think of quite a few others – disruptive might even be a mild term; revolutionary may be better.

OK, in simple terms, just what is this technology?

I am certainly no expert on this, but one area has sparked my interest. I got an 8k TV and wondered how its upscaling technology works. The first generation used machine learning (weak AI in the above article):
https://www.techradar.com/au/news/heres-the-secret-behind-8k-ai-upscaling-technology

That’s the TV I got. But this is accelerating fast, and this year’s generation is better again – now using deep learning, with even better results, and a new idea called Samsung AI Down-scaling:

Watch from 27 minutes.
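For readers curious what learned upscaling looks like at its simplest, here is a minimal PyTorch sketch of an SRCNN-style super-resolution network – the generic idea from the research literature, not Samsung’s proprietary design; the class name and layer sizes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    """Toy learned upscaler: bicubic-enlarge the frame, then let a small CNN
    restore detail that plain interpolation smears out."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),  # learned detail correction
        )

    def forward(self, x):
        upsampled = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                                  align_corners=False)
        return upsampled + self.net(upsampled)  # bicubic baseline plus learned residual

model = TinySRCNN(scale=2)
frame = torch.randn(1, 3, 270, 480)   # small dummy frame for illustration
print(model(frame).shape)             # torch.Size([1, 3, 540, 960])
```

In a real TV, a network like this would be trained on pairs of pristine high-resolution frames and their downscaled versions, then baked into the picture-processing chip.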

Other companies are also working on their own approaches, e.g.:
https://www.isize.co/
https://www.isize.co/escaping-the-complexity-bitrate-quality-barriers-of-video-encoders/

We also have new codecs appearing to transmit video, such as AV1. The trouble is that the encoding time is horrendous, and people are working on ways to reduce it. One company is using AI to help solve it.

For some papers with more technical depth see:
https://cv.snu.ac.kr/research/taid/
https://www.isize.co/wp-content/uploads/2019/09/IBC2019_iSIZE_v81.pdf

This stuff is just in its infancy. Eventually it will be built into the encoding and decoding chips so that at the bit rates we actually have access to via the internet, 8k television will become a reality. Whether you can see a difference at normal viewing distances and screen sizes over 4k is another matter:
https://www.techhive.com/article/3529913/8k-vs-4k-tvs-most-consumers-cannot-tell-the-difference.html

But I must mention that other tests have shown something very interesting: 8k downscaled to 4k looks obviously better than straight 4k. Why that may be, I leave the reader to investigate (it may have something to do with noise).

I must say I love my 65-inch 8k TV, and many people have commented on how good the picture is. But with each new model improving on the last, you haven’t seen anything yet.

 

Andy Resnick

That’s an interesting question, not least because the term ‘A.I.’ includes a whole range of technologies. There have already been significant and most likely permanent changes in industry and everyday society due to ‘knowledge-based systems’ and ‘big data analysis’.

A lot of what I see in the lab is restricted to something like image analysis: not just clinical applications (radiology charts and disease diagnosis), but also finding patterns in (image, genomic, proteomic, …) data. And while there are reports of using A.I. to help design experiments, aside from one high-profile splashy paper in 2009, I have yet to read about an expert system conceiving of an experiment.

In the classroom, I have experience with A.I.-lite systems in place for a lot of the pre-calc and calculus sequence in math (“Mastery learning”) and I expect that similar approaches could be used in vocational settings. However, I have not seen any evidence that an A.I. system could be used to wholly replace an instructor.

As a summary comment, my view of A.I. may be retrograde, but I draw a parallel between the way A.I. will grow to influence our social and professional lives and the way phones have grown to permeate them. For example, in the past, phones were designed to interface with people: the shape of the handset conformed to our face. Now, people have to conform to the phone: smartphones are flat, quite unlike our faces. One result is that we hold smartphones quite differently than ye olde handset. Another: if your fingers are too large for the virtual keypad, sucks for you.

That is to say, humans will adapt to A.I. interfaces as presented to us, as opposed to adapting A.I. to fit our needs. I agree that this is quite a pessimistic and passive view of humanity.

 

neilparker62

The only ‘real’ encounter I have had with AI to date was the AlphaZero chess-playing ‘terminator’, which dispatched the (then) reigning computer chess champion, Stockfish, with ease. I had a look through some of AlphaZero’s games – fascinating!

I suppose in the classroom one should have something like Google Assistant or Amazon’s ‘Alexa’ easily available for students to ask questions. As a Maths teacher I encourage students to make use of ‘low-level’ AI systems such as online graphing utilities and – of course – Wolfram Alpha. I’m not sure what else – I guess the worry is that an AI ‘teacher’ might eventually make us all redundant!

 

jfizzix

I am disquieted by the thought that AI-assisted deception (e.g., deep fakes) may reach the point where the majority of people are capable of using it while also remaining completely susceptible to it. Misinformation campaigns will become a fine art, and with political trust at a low level to begin with, it is critical that we develop adequate defenses against these measures (possibly also with AI), lest complete cynicism, paranoia, and apathy destroy the fabric of society.

Read part 2!