Can AI Redefine the Meaning of Work and Improve Quality of Life?

  • Thread starter: EngWiPy
  • Tags: AI
In summary, the conversation discussed the potential impact of AI on humanity. While some believe that AI can save humanity by taking over jobs and allowing humans to focus on more important things, others argue that this could lead to higher unemployment rates and create a need for new jobs in areas such as "love" and "compassion". There are also concerns about the quality of these jobs and about companies using AI to generate more revenue while laying off employees. There is also debate about whether AI will actually lead to job losses or simply change the nature of jobs. Some argue that we are not close to achieving "strong AI" and that current research is focused on "intelligence augmentation", where machines enhance human intelligence rather than replacing it completely.
  • #1
EngWiPy
The other day I was intrigued by the title of a TED talk called: How AI Can Save our Humanity. But it wasn't what I expected. I expected to hear how AI can help us detect diseases, for example, but I was surprised to hear that AI can save humanity by taking our jobs, and thus let us focus on what is important to us as humans.

Of course, this implies a higher unemployment rate, but the speaker suggested that we can create more "love" and "compassion" jobs! I think a more realistic model for letting us focus on what is important to us as humans would be to reduce working hours with the help of AI, while maintaining a good quality of life for most people. The question is: will this be a viable model?

For me, I can see greed dominating the scene: big companies will use AI to generate more revenue while laying off more employees, and the only people who will keep their jobs will be the CEOs along with a few helpers. But creating "love" and "compassion" jobs is not a solution, because you don't reduce work stress this way; you just transfer it to other occupations that would probably pay much less and have worse working conditions, which means the same (or more) stress with a lower quality of life.

What do you think will happen in the era of AI?
 
  • #2
This is why some folks are floating the Universal Basic Income idea, where everyone gets paid a monthly amount sufficient to survive without working. It's like welfare for everyone. There are clear downsides to it:

https://en.wikipedia.org/wiki/Basic_income

It's not hard to imagine that folks will be treated poorly in such a system. In some ways, it reminds me of the current employee crisis at Disney, where theme park employees aren't paid enough to even rent housing and so wind up loving their job but living in a car.

 
  • #3
EngWiPy said:
The other day I was intrigued by the title of a TED talk called: How AI Can Save our Humanity. But it wasn't what I expected. I expected to hear how AI can help us detect diseases, for example, but I was surprised to hear that AI can save humanity by taking our jobs, and thus let us focus on what is important to us as humans.

Of course, this implies a higher unemployment rate, but the speaker suggested that we can create more "love" and "compassion" jobs!
The argument that technology will create unemployment has been around as long as technology and has never been true. It isn't inconceivable that it will someday become true, but it isn't on any time horizon we can foresee. "AI" doesn't even necessarily create such a situation, nor is "AI" necessarily the only way such a situation could occur.

I don't know that the new jobs that will be enabled will be "love and compassion" jobs, but in the US, at least, we're already moving toward a service-based industry more than a product-based industry.
 
  • #4
I think invariably there will be fewer jobs for humans. Of course, on the bright(?) side, more people are retiring (10,000 per day) while population growth is decreasing, reducing job competition. But the jobs left over or newly created will not be suitable for many of those who were displaced by machines. Not everybody is a "people person", and there are also those who cannot or will not adapt. The wages for the easiest/most common jobs will be kept low because of the competition for them.

There has been some talk of robot taxes on companies that displace workers with automation. Look at the Stop Bezos tax proposed by B. Sanders because many of Amazon's employees are on food stamps. There are going to be new problems: political, economic, and cultural. Let's hope we can find rational, equitable solutions.
 
  • #5
I'm not sure people are even anywhere close to having developed a so-called AI. Are self-improving, self-optimizing algorithms to be regarded as AI?
 
  • #6
EngWiPy said:
I was surprised to hear that AI can save humanity by taking our jobs, and thus let us focus on what is important to us as humans.
I think one such important human thing is being able to feel your own worth. With most people not even having a real job, what I expect is a boom in happy pills.
 
  • #7
What bothers me about the speculation of AI taking jobs from humans is that the people making such speculations are making the assumption that the said jobs are somehow static in the sense that the roles, responsibilities and tasks of these jobs are stable and unchanging.

In my own humble opinion, that is frankly a dubious assumption to make. Whenever we have seen major technological advances in the past, what we have instead witnessed was that the very nature of the particular job(s) changed in terms of its scopes and responsibilities, so the job(s) essentially take on a different role. Yes, this could involve some job losses, but in its stead the scope of work will change, leading to potentially a greater demand for people with new skillsets.

I should also add that we need to be careful when we are talking about AI. "Strong AI" has at times been characterized as the research in the development of machines that have the capability for consciousness, self-awareness and the ability to act autonomously and independently of humans. By that definition, we are nowhere even close to that stage, and I am skeptical that we will achieve this in the near future.

What I see instead in much of the research being conducted, and where I see the most advances, is in what I would consider "intelligence augmentation" (a phrase that I believe was popularized by Berkeley computer scientist/statistician Michael I. Jordan), where computing machines (through deep learning or through other methods) essentially augment the intelligent activities of humans, but for which humans remain the key drivers. In this scenario, the idea of "replacing" humans does not make much sense, since human input will remain key in any major decision making. The machines will serve as an aid in helping us make better decisions.
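To make this concrete, here is a minimal human-in-the-loop sketch (purely illustrative; the toy dataset and the 0.90 review threshold are arbitrary choices of mine, not anything from the research I mentioned). The model only proposes a label and a confidence, and uncertain cases are escalated to a person:

```python
# Minimal "intelligence augmentation" sketch: the model proposes, a human decides.
# Illustrative only: toy dataset, arbitrary 0.90 review threshold.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale then fit a simple classifier; the point is the workflow, not the model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

def recommend(case):
    """Return (label, confidence, action); the human reviewer keeps the final say."""
    proba = model.predict_proba(case.reshape(1, -1))[0]
    label = int(proba.argmax())
    confidence = float(proba[label])
    action = "ESCALATE TO HUMAN REVIEW" if confidence < 0.90 else "suggest, pending sign-off"
    return label, confidence, action

print(recommend(X_test[0]))
```

The design point is simply that the algorithm never acts on its own output; a person accepts, rejects, or reviews every recommendation.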
 
  • #8
StatGuy2000 said:
What bothers me about the speculation of AI taking jobs from humans is that the people making such speculations are making the assumption that the said jobs are somehow static in the sense that the roles, responsibilities and tasks of these jobs are stable and unchanging.

In my own humble opinion, that is frankly a dubious assumption to make. Whenever we have seen major technological advances in the past, what we have instead witnessed was that the very nature of the particular job(s) changed in terms of its scopes and responsibilities, so the job(s) essentially take on a different role. Yes, this could involve some job losses, but in its stead the scope of work will change, leading to potentially a greater demand for people with new skillsets.
Agreed. The fact that except in times of extreme economic downturn (which are short-lived), upwards of 96% of people who want jobs get jobs (plus or minus about 2%), despite massive technology shifts and demographic shifts, tells me that a market economy creates jobs out of thin air for people who want them (and when people leave the workforce, those jobs evaporate back into thin air).

The downside to this is that the new jobs created are created by normal supply and demand economics (increased supply), which pushes wages down.

The potential "new" danger from "AI" or more broadly improved smart automation is that traditionally the jobs lost to automation have been low skill/income jobs, so there hasn't been much societal downside to the shift. As automation gets smarter, the lost jobs will be higher skill and that makes it tougher to move up instead of down when seeking a replacement job. But:
...computing machines (through deep learning or through other methods) essentially augment the intelligent activities of humans, but for which humans remain the key drivers. In this scenario, the idea of "replacing" humans does not make much sense, since human input will remain key in any major decision making. The machines will serve as an aid in helping us make better decisions.
Did you see "Hidden Figures"? In the movie, the "computers" were the people who essentially acted as human spreadsheets. They were highly intelligent mathematicians. They were replaced by an IBM mainframe, but the department head for the black "computers" saw it coming and taught her department to be programmers. The total number of employees involved in computing decreased, and those who gained the new skill stayed employed and in most cases got better jobs.

This was 57 years ago. It's still true today, and while it hasn't seemed to cause a problem yet, it may as it gets more prevalent.
I should also add that we need to be careful when we are talking about AI. "Strong AI" has at times been characterized as the research in the development of machines that have the capability for consciousness, self-awareness and the ability to act autonomously and independently of humans. By that definition, we are nowhere even close to that stage, and I am skeptical that we will achieve this in the near future. What I see instead in much of the research being conducted, and where I see the most advances, is in what I would consider "intelligence augmentation"...
Agree with all. I think there's a lot of mixing and matching, causing hype and fear mismatches in discussions; fearing something that isn't realistic while describing something that has already happened (and will just continue to expand).
 
  • #9
russ_watters said:
The argument that technology will create unemployment has been around as long as technology and has never been true. It isn't inconceivable that it will someday become true, but it isn't on any time horizon we can foresee.

StatGuy2000 said:
By that definition, we are nowhere even close to that stage, and I am skeptical that we will achieve this in the near future.

What is meant by the near future or the foreseeable future? For some it is tomorrow; for others, perhaps as much as five or ten years. Clearly we will not wake up tomorrow or next month to find noticeable impacts of AI on particular jobs, but in five years? How long will it take for people to adjust, or will they even be able to? Who will help them: the companies, the government? We already see that companies want employees who can "hit the ground running". Will they change? Can we afford government-sponsored retraining programs? It has been estimated that workers may have as many as seven or eight careers in their lifetime. That represents a lot of reeducation time (i.e., unemployment).
 
  • #10
russ_watters said:
Agreed. The fact that except in times of extreme economic downturn (which are short-lived), upwards of 96% of people who want jobs get jobs (plus or minus about 2%), despite massive technology shifts and demographic shifts, tells me that a market economy creates jobs out of thin air for people who want them (and when people leave the workforce, those jobs evaporate back into thin air).

I would dispute your claims that (a) 96% of people who want jobs get jobs, and (b) that a market economy creates jobs "out of thin air" in a figurative sense, but that debate has really nothing to do with AI or even necessarily technological developments in general, and has been discussed extensively in other threads, so I digress.

The potential "new" danger from "AI" or more broadly improved smart automation is that traditionally the jobs lost to automation have been low skill/income jobs, so there hasn't been much societal downside to the shift. As automation gets smarter, the lost jobs will be higher skill and that makes it tougher to move up instead of down when seeking a replacement job.

Did you see "Hidden Figures"? In the movie, the "computers" were the people who essentially acted as human spreadsheets. They were highly intelligent mathematicians. They were replaced by an IBM mainframe, but the department head for the black "computers" saw it coming and taught her department to be programmers. The total number of employees involved in computing decreased, and those who gained the new skill stayed employed and in most cases got better jobs.

This was 57 years ago. It's still true today, and while it hasn't seemed to cause a problem yet, it may as it gets more prevalent.

It is true that the potential lost jobs will be of "higher skill", so to speak. I put "higher skill" in quotes because in effect much of the automation that we will be seeing with AI will nonetheless be of highly repetitive tasks that are part and parcel of high-skilled positions (in essence, low-skill tasks that burden those in higher-skill positions). I also emphasize the word "potential" because another likelihood is that the less time an individual worker spends on the low-skill tasks that are automated, the more that worker can focus on the higher-skill aspects of the job, without necessarily leading to job losses (or if there are losses, they would end up being minimal or not due to technology). The impact will likely vary sector by sector and company by company.

The scenario you describe in the film "Hidden Figures" (which I'm embarrassed to say I have not seen yet -- on my TDL) is a good example of what we are both talking about.

Agree with all. I think there's a lot of mixing and matching, causing hype and fear mismatches in discussions; fearing something that isn't realistic while describing something that has already happened (and will just continue to expand).

We seem to be on a roll in agreement, aren't we? :biggrin:
 
  • #11
gleem said:
What is meant by the near future or the foreseeable future? For some it is tomorrow; for others, perhaps as much as five or ten years. Clearly we will not wake up tomorrow or next month to find noticeable impacts of AI on particular jobs, but in five years?
I'd say 20 years for near future and 50 for foreseeable future. Those might be a bit long, so I'd be willing to negotiate them down somewhat... Anyway:

I would be utterly shocked if there were a truly game-changing breakthrough in the next 20 years and still somewhat surprised if there is one in the next 50. You can use that sentence to describe my views on fusion power too...
How long will it take for people to adjust, or will they even be able to? Who will help them: the companies, the government?
To what; the continued incremental shifts or the game changer? Who; individuals or society?

I guess the short answer for the ongoing shifts is that the economy self-adjusts fast enough that it has no noticeable impact on employment levels. And to me the point of the "game changer" is that it is very difficult to predict, much less adjust to. So I don't know how we would/could react. Perhaps if someone painted a scenario I could comment on it...
It has been estimated that workers may have as many as seven or eight careers in their lifetime. That represents a lot of reeducation time (i.e., unemployment).
You sure that's "careers" and not "jobs"?
 
  • #12
russ_watters said:
To what; the continued incremental shifts or the game changer? Who; individuals or society?

The individual displaced worker. Not everybody is flexible enough to reinvent themselves repeatedly during their lifetime.

russ_watters said:
You sure that's "careers" and not "jobs"?

It may be a bit of both, since the new jobs will probably involve some application of AI to different problems, requiring coming up to speed in different areas of application. It may be a variant of gig-style work.

A report by McKinsey & Co,
https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Notes%20from%20the%20frontier%20Modeling%20the%20impact%20of%20AI%20on%20the%20world%20economy/MGI-Notes-from-the-frontier-Modeling-the-impact-of-AI-on-the-world-economy.ashx

examines the impact of AI (by simulation) on the world economy through 2030. They note that AI is not an infant technology, that it is widely being implemented today, and that we have yet to experience its effect. They believe it will not be noticeable until about 2023, at which time it will accelerate and by 2030 contribute a net increase of more than $9 trillion to the world economy. So in less than 12 years we have got to have a solution to any social disruption. They also believe that AI will create more jobs than it replaces, but that is based on historical precedents of technology. It is not clear to me whether the appropriate workforce will be available to fill these new jobs. The report also raises the issue of a negative popular reaction to AI implementation, which could be a significant problem in a democratic society. My thought is that this gives an advantage to more authoritarian countries, which can dictate whatever is necessary to become leaders in AI and gain the predicted economic advantages. China has authorized a $22 B national AI program and is currently ranked right behind the US in AI.
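As a rough illustration of that "barely noticeable until about 2023, then accelerating" trajectory, here is a toy S-curve. This is not McKinsey's model: only the ~$9 trillion-by-2030 endpoint comes from the report, while the midpoint and steepness below are my own guesses.

```python
# Illustrative S-curve only -- NOT McKinsey's model. The ~$9T-by-2030 endpoint is
# taken from the report; the midpoint (2026) and steepness (0.9) are guesses.
import math

TOTAL_2030 = 9.0    # trillions of dollars, net contribution by 2030 (per the report)
MIDPOINT = 2026     # assumed year of fastest growth
STEEPNESS = 0.9     # assumed; larger = sharper takeoff after ~2023

def cumulative_contribution(year):
    """Logistic adoption curve: negligible early, accelerating after ~2023."""
    return TOTAL_2030 / (1.0 + math.exp(-STEEPNESS * (year - MIDPOINT)))

for year in range(2018, 2031, 2):
    print(year, f"~${cumulative_contribution(year):.1f}T")
```

Under those assumed parameters the contribution stays under about $1 T through 2023 and then climbs steeply, which matches the qualitative story the report tells.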

To the OP: Putin has said that whoever leads in AI will rule the world. In an email to DoD employees about the establishment of the Joint Artificial Intelligence Center (JAIC), Deputy Secretary of Defense Patrick Shanahan stated, “Plenty of people talk about the threat from AI; we want to be the threat.” So much for saving our humanity.

 
  • #13
The person who gave the talk is a machine learning expert, and he expects that AI will take most of our jobs. Listen here. That is why he suggested creating jobs that focus on "love" and "compassion", because algorithms cannot feel and have no emotions. I argued that this is not a viable solution. We already have these "love" jobs in mental health care, and it is not working: you pay money for someone to listen to you, while in actuality they don't care about you beyond what you pay for. Humans need less stress so they can give time to human values and to their families in ways other than money, and AI can facilitate this by reducing working hours without replacing workers. But could this happen? I doubt it, because of greed.

Nowadays algorithms can perform better than humans even at the work of doctors (at least in diagnosing diseases) and lawyers. What is there left to do that has value? Capital will be concentrated in big companies like Google and Facebook. Capital is already concentrated, and AI will increase the gap. There should be a new economic model for all to have a decent life. I think the comparison with the past is not a measure of the future. Nowadays machines are more powerful and can do complex tasks.
 
  • #14
I like Jaron Lanier's idea of monetizing the data we share. ML algorithms constantly feed on the data we give away for free, while the big companies use/sell these data to make big money. This would also give a clearer definition of privacy in the digital age by giving users control over their data. It needs a political decision.
 
  • #15
EngWiPy said:
The person who gave the talk is a machine learning expert, and he expects that AI will take most of our jobs. Listen here. That is why he suggested creating jobs that focus on "love" and "compassion", because algorithms cannot feel and have no emotions. I argued that this is not a viable solution. We already have these "love" jobs in mental health care, and it is not working: you pay money for someone to listen to you, while in actuality they don't care about you beyond what you pay for. Humans need less stress so they can give time to human values and to their families in ways other than money, and AI can facilitate this by reducing working hours without replacing workers. But could this happen? I doubt it, because of greed.

Nowadays algorithms can perform better than humans even at the work of doctors (at least in diagnosing diseases) and lawyers. What is there left to do that has value? Capital will be concentrated in big companies like Google and Facebook. Capital is already concentrated, and AI will increase the gap. There should be a new economic model for all to have a decent life. I think the comparison with the past is not a measure of the future. Nowadays machines are more powerful and can do complex tasks.

While I agree that in the current economic model in many Western countries (in particular the US) the concentration of capital has exacerbated inequality, that is tangential to whether AI will replace humanity in the employment realm.

The assumption that both you and the speaker make is again based on the fundamental notion that the goal of machine learning algorithms is to replace the activities of humans, rather than, as I spoke about earlier, to augment human capabilities. My own feeling, based on the evidence I've seen thus far, is that machine learning algorithms have been oversold in terms of their capabilities.

As far as algorithms performing better than humans -- yes, that is true, but only within very narrow settings or areas of expertise, in circumstances that aren't necessarily realistic. Where humans have the advantage is in our capability to more or less seamlessly transition from one area of expertise to another, and to learn tasks beyond specific realms. Even the most powerful AI algorithms in existence struggle to achieve this level of general intelligence, and progress has been frustratingly slow in this regard. And in an employment setting, this ability to move from one area of expertise to another is critical.
 
  • #16
When AI replaces high-wage high-skill jobs, you will be left with low-wage low-skill jobs. Inequality will be exacerbated by AI.

The development of AI moves very fast because many big companies have made large investments. People are predicting what will happen 20-30 years down the road, and preparing for that. It is estimated that 40-47% of our current jobs will be replaced by machines in the next 20-30 years.

You don't have to have a general intelligence to replace jobs. You can have many specialized algorithms, trained on specialized data for different fields. I don't think of AI as a fully conscious robot. This image is influenced by Hollywood.
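To illustrate what I mean by "many specialized algorithms", here is a toy sketch: each model is trained only on its own domain's data and is useless outside it. The scikit-learn datasets below are stand-ins for real field-specific data, not actual deployments.

```python
# Toy sketch of "many narrow specialists": each model knows only its own domain.
# Datasets here are stand-ins for real domain-specific data.
from sklearn.datasets import load_digits, load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

domains = {
    "digit triage": load_digits(return_X_y=True),
    "wine screening": load_wine(return_X_y=True),
}

specialists = {}
for name, (X, y) in domains.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()  # evaluated within its own domain only
    specialists[name] = clf.fit(X, y)
    print(f"{name}: ~{score:.2f} cross-validated accuracy")
```

None of these models can do the others' jobs; replacing tasks this way requires no general intelligence at all.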

Many countries, including Canada (Ontario) and Finland, have started to experiment with the basic income model, one motivation being the "jobless" future due to AI. But I also like the idea that AI needs data, and this data is supplied by the users; thus commercializing data is another way to go. Either way, there need to be some adjustments to this reality.
 
  • #17
EngWiPy said:
... AI can facilitate this by reducing working hours without replacing workers. But could this happen? I doubt it, because of greed...
Greedy people act out of self-interest. It is in their self-interest not to get shot or beaten by angry mobs. There is also the threat of AI. It is in wealthy/powerful people's interest to make it impossible for AI to seize wealth or power.

EngWiPy said:
... What is there left to do that has value? ...
Provide security.

EngWiPy said:
...the speaker suggested that we can create more "love" and "compassion" jobs! ...
Guess I will be at the bottom of the pyramid. I suck at compassion. :(

StatGuy2000 said:
...

I should also add that we need to be careful when we are talking about AI. "Strong AI" has at times been characterized as the research in the development of machines that have the capability for consciousness, self-awareness and the ability to act autonomously and independently of humans...

If AI is "strong enough", it should be able to do the love-and-compassion grunt work. People need cardiovascular exercise in order to remain healthy. Doing lots of material handling saves the energy that would be consumed by a robot and also saves the energy and resources that would be spent creating exercise equipment.

What constitutes a "good job" is largely perception. If the AI is "strong enough", it will be able to figure out what you need to hear. You will regularly hear how great your work is, but not quite so often that it causes a lack of trust. Periodically the AI can send a human manager on a jog so that (s)he can drop off fertilizer or a hand tool and tell you in person how great your section of the garden looks.
 
  • #18
stefan r said:
... It is in wealthy/powerful people's interest to make it impossible for AI to seize wealth or power.
...

What do you mean by "AI seize wealth or power"? You talk about AI as if it is/will be an independent entity capable of running itself. If you mean it is not in wealthy and powerful people's interest to make AI increase their power and wealth, then that needs more explanation.
 
  • #19
EngWiPy said:
When AI replaces high-wage high-skill jobs, you will be left with low-wage low-skill jobs.
This requires the assumption that the rate of elimination of high-skill/wage jobs will be substantially greater than their rate of creation. This assumption is not necessarily true moving forward and, per my example of "Hidden Figures", appears in fact to be totally wrong looking backwards.
Inequality will be exacerbated by AI.
I've speculated that it might be as well, but I want to emphasize that my level of concern over this issue is really low. When people are lining up around the block to apply for minimum wage jobs as WalMart greeters, that tells me the problem is with the supply, not the demand. Yes, today's demands are more intellectual than physical - the opposite of 50 years ago - but having quality brains is supposed to be what makes humans unique, and it should not be blasphemous to suggest humans should be expected to use them.

To put a finer point on it, I'm not worried about the jobs of WalMart greeters because there is no realistic time horizon for when it would be economically beneficial to buy a million dollar robot to replace an $8 an hour waving hand.

The high cost means that AI will necessarily start by replacing high pay jobs, but again, this is something that is already happening and has not presented a problem. It's hard to conceptualize a scenario where that could change. Any specific ideas?
The development of AI moves very fast because many big companies have made large investments. People are predicting what will happen 20-30 years down the road, and preparing for that. It is estimated that 40-47% of our current jobs will be replaced by machines in the next 20-30 years.
And you assume this is a problem?

In another thread, someone talked about internet speeds being slower when he was a kid. I considered replying, but recognized I may as well have been speaking of unfathomably ancient history. I'm 42, and 25 years ago I was in high school; the internet as we know it didn't exist and modems didn't have a "high speed". 30 years ago, consumer-level networking just plain didn't exist. Point being, the 40-47% prediction is utterly meaningless unless there is another number to compare it to: what is the equivalent stat for 20-30 years ago?

The iPhone was released in 2007; just over 11 years ago.
You don't have to have a general intelligence to replace jobs. You can have many specialized algorithms, trained on specialized data for different fields. I don't think of AI as a fully conscious robot. This image is influenced by Hollywood.
Agreed. That's why I've used the example of the utterly cataclysmic spreadsheet. Millions of workers have been rendered obsolete by this decades-old invention -- and if you want, you can go back 50 years to when computers first became a "thing". They weren't as graphical, but spreadsheet-type calculations were some of the first uses of computers. It's ancient history and therefore does not concern me.
Many countries, including Canada (Ontario) and Finland, have started to experiment with the basic income model, one motivation being the "jobless" future due to AI. But I also like the idea that AI needs data, and this data is supplied by the users; thus commercializing data is another way to go. Either way, there need to be some adjustments to this reality.
"Basic Income" is a disastrous fantasy driven by not recognizing why communism failed. It's fine to experiment on it in order to disprove it (think: aether), but I'm glad people are doing so in ways that won't waste my money (in countries that aren't mine).

[edit]I feel like it was incongruous to say it is ok to research a thoroughly proven wrong idea, and implies too generous a characterization. I really just mean I don't care as long as it isn't my money that is being wasted. Welfare systems are thoroughly proven to incentivize non-productivity, so the idea that giving people money for no reason will cause some undefined stimulus is very misguided.

There may be a future far afield where robots simply do everything we want and "basic income" makes sense, but such a future bears no resemblance to today's reality, so today's experiments offer no insight to how such systems could work.
 
  • #20
It is difficult to predict what the future will be in 20-30 years, so you need to base predictions on recent developments and make projections. One high-skill/high-wage job I can think of is diagnosing diseases: algorithms can do it better than experienced doctors. Algorithms can also do the job done by radiologists and anesthesiologists. Today I read an article about how AI could drastically affect HR by doing most of the job. Those are just a few examples.

About basic income: I heard that California had a plan to experiment with the idea, but I'm not sure if they implemented it. So the US is not far from the idea, and strangely enough, people in Silicon Valley are supporting it! Your taxes are already redistributed through welfare and other programs, which, as you mentioned, are not working. This is another reason why some economists support the universal basic income idea. I think we need to wait and see the results.

Apart from the effect of AI in the future, basic income is hypothesized to solve many existing problems, the main one being poverty. To go back to your point about WalMart greeters, I agree this indicates a lack of skills, but it may also indicate an urgency to survive. Most of these people may have no time to develop the skills needed for other high-skill jobs. Basic income may solve this problem, giving less fortunate people the time and the money to develop these skills and make a larger impact on the economy in return.
 
  • #21
In an IEEE Spectrum piece (https://spectrum.ieee.org/view-from-the-valley/robotics/artificial-intelligence/intel-execs-address-the-ai-talent-shortage-ai-education-and-the-cool-factor), Intel executives see explosive demand for creative AI engineers with diverse skills, needed not only by tech companies but also by health care, finance, retail, and manufacturing. Singer says AI development is moving very fast: what was state of the art in 2016 is legacy in 2018. Rice says, “I have a very young child, so I am of the assumption at this point that any career she has is going to have artificial intelligence implications.” These will be great jobs for those who are prepared.
 
  • #22
russ_watters said:
"Basic Income" is a disastrous fantasy driven by not recognizing why communism failed. It's fine to experiment on it in order to disprove it (think: aether), but I'm glad people are doing so in ways that won't waste my money (in countries that aren't mine).

[edit]I feel like it was incongruous to say it is ok to research a thoroughly proven wrong idea, and implies too generous a characterization. I really just mean I don't care as long as it isn't my money that is being wasted. Welfare systems are thoroughly proven to incentivize non-productivity, so the idea that giving people money for no reason will cause some undefined stimulus is very misguided.

There may be a future far afield where robots simply do everything we want and "basic income" makes sense, but such a future bears no resemblance to today's reality, so today's experiments offer no insight to how such systems could work.

Alert: this is an aside from the main discussion regarding AI:

@russ_watters, on what basis are you claiming that welfare systems are "thoroughly proven to incentivize non-productivity"? Are you basing this on the claims of, say, monetarist economists such as Milton Friedman (who I personally believe has had a malign influence on the field of economics)? What sources can you cite for this claim? Furthermore, when you talk about "welfare systems", there are different models of welfare systems around the world -- are you stating that all welfare systems essentially incentivize non-productivity?

I also question your notion that basic income is a disastrous fantasy. Consider this scenario -- I have heard arguments that if we replaced most current welfare programs available in the US with a guaranteed minimum annual income, that could in theory be a more efficient way to help low-income individuals. Further, experiments on basic income dating back to its introduction in Manitoba, Canada during the 1970s hardly suggest it was a "disaster". Read the following excerpt from the Wikipedia article:

https://en.wikipedia.org/wiki/Mincome#Results
 
  • #23
EngWiPy said:
What do you mean by "AI seize wealth or power"? You talk about AI as if it is/will be an independent entity capable of running itself. If you mean it is not in wealthy and powerful people's interest to make AI increase their power and wealth, then that needs more explanation.
What prevents AI from being able to run itself? The assumption is that an AI is able to make decisions and that those decisions are better than what a human being would make.

In November, Americans have to decide who they want to represent them in Congress. Assuming AI improves enough, you could vote for a piece of hardware with software. Or you could vote for a human who has the good sense to plug in a piece of hardware with software. If the AI is good enough, is there ever a reason to vote for an idiot/senator who says (s)he will not follow the recommendations of proven AI? Humans might be able to retain the status and the higher-than-average salary that comes with the title "senator". If, however, (s)he plugs in a program that fails to provide a living for most of the population, it is unlikely that (s)he will remain in power for long.

Jobs like CEO, banker, financial consultant/broker, and lawyer can all disappear. There is no reason to ever work for an ape whose instincts fail in a competitive environment. If the AIs make better decisions about the use of capital there is no reason to allow apes to misuse capital.
 
  • #24
I feel that you are talking about robots taking over humanity, or a conflict between machines and humans. I am not sure this would ever happen, but it is likely that large companies will use AI and ML to become more powerful and generate more revenue, while laying off more employees. It is mentioned that the unemployment rate now is lower than ever, but is it? How is unemployment defined? Is surviving on a low-wage job and living under the poverty line considered employment? This will be exacerbated by the age of ML and AI, and will create an even bigger gap between classes, and more people will struggle just to survive, unless something is done about it.

Maybe there will be a surge of demand for AI/ML engineers in different disciplines, but if you can create an algorithm that can do the job better and faster than an experienced professional in a certain field, then how many AI/ML engineers would you really need to create and maintain these machines?

The other day I read someone saying that the customer service of some large companies is now powered by ML, and it is interactive, so you no longer need human customer service unless necessary. Doctors (at least for diagnosis) and lawyers can easily be replaced. I am not sure what kinds of jobs there will be in the future that have value.

I have no problem with ML/AI doing things better and faster than humans, but the question is: what will happen to the people who will be replaced by these machines?
 
  • #25
EngWiPy said:
...This will be exacerbated by the age of ML and AI, and will create an even bigger gap between classes, and more people will struggle just to survive, unless something is done about it...

There is no need for "a bigger gap between classes". Civilization/society created an upper class in order to have educated minds making decisions that are beneficial for society in both the short and long term. As soon as there is an AI that can make better decisions there is no reason to have that upper class.

We are not talking about jobs not getting done. The AI is still providing professional services. It could still be called "capitalism": resources are allocated by the demand set by the markets. The difference is that we do not have to offer the AI a second mega yacht in order to convince it to get up and think about the job. The AI will just efficiently route consumer goods to the people who demand those goods. The AI handles the struggle for people's survival, so no one needs to struggle. You can set up simulated scenarios if someone wants to struggle just for kicks. You could, for example, type a vow of poverty into your consumer profile.

We could have actors pretend to be a wealthy elite. We do not have to shut down the yachting industry. The AI is better at allocating capital resources so if yachting has social value the job will get done.

EngWiPy said:
...
... but the question is what will happen to the people who will be replaced by these machines?

I can think of a few jobs: Audience member, cheering section, tourist, consumer, citizen, parent, adult escort.

Are athletic events or concerts as much fun if there is no lively audience? In retrospect people can say that humans stopped commuting to a cubicle or warehouse and finally started doing "productive work" full time.

I keep clocking in 40 hours a week, so it is hard to put in enough time working on the elections. I keep talking to people who have not yet registered to vote. There are only five weeks left and many people do not know who is running. It appears that a large portion of the population does not have time to do anything important.
 

FAQ: Can AI Redefine the Meaning of Work and Improve Quality of Life?

How can AI help in solving global issues?

AI has the potential to assist in solving many global issues by providing intelligent solutions and insights. For example, AI can be used to analyze large amounts of data to identify patterns and trends, which can help in predicting natural disasters, understanding climate change, and improving healthcare. AI can also be used to optimize resource allocation, improve transportation, and enhance communication.

Can AI truly save humanity?

While AI has the potential to greatly benefit humanity, it is not a cure-all solution. AI is a tool that can aid in solving certain problems, but it also has its limitations. It is important for humans to carefully consider the ethical implications of AI and ensure that it is developed and used responsibly.

Will AI take over human jobs and lead to unemployment?

There is a concern that AI will replace human jobs, but it is also creating new opportunities and roles. AI can automate routine and repetitive tasks, freeing up humans to focus on more creative and complex work. It is important for individuals to continuously develop new skills and adapt to the changing job market.

Can AI eliminate poverty and inequality?

AI can contribute to reducing poverty and inequality by improving access to education, healthcare, and resources. For instance, AI-powered education platforms can provide quality education to those in remote or underprivileged areas. However, it is crucial to address the root causes of poverty and inequality and use AI as a tool to support and supplement these efforts.

Is AI a threat to humanity?

AI can pose some threats if it is not developed and used responsibly. For example, if AI algorithms are biased or not properly tested, they can lead to discrimination and inequality. It is important for humans to closely monitor and regulate the development and use of AI to ensure that it does not pose any significant threats to humanity.
