Is Q* the Key to Achieving AGI and Solving Math Problems?

In summary, Q* (Q-star) is a rumored OpenAI model reported to combine generative AI with some form of search or logical reasoning, and said to be competent at grade-school math problems. Some commentators see it as a significant step toward artificial general intelligence (AGI), while others caution that the details remain unconfirmed rumor and that its actual capabilities cannot be judged until OpenAI publishes something official.
  • #1
gleem
OpenAI has announced the development of a new AI model called Q-star, written Q*. It has reportedly been developed to solve math problems and appears to be competent at the grade-school level. The development of Q* also seems tied to Altman's removal as CEO.

Of course, there is a lot of hype around Q*. Nonetheless, it seems to be a significant step in the development of AGI. It combines generative AI with some logical reasoning capability. Supposedly it can "reason" its way through math problems. The best website that I found discussing Q* is

https://aitoolmall.com/news/how-to-use-q-star-ai-and-its-safety/

which includes directions for accessing it and describes some of its capabilities.
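As an aside on the name: commentators have widely speculated that "Q*" nods to Q-learning from reinforcement learning, possibly combined with A*-style search, though OpenAI has confirmed nothing. Purely for context, here is a minimal tabular Q-learning sketch in Python; the environment interface is an assumption, and any resemblance to what Q* actually does is speculation.

# Minimal tabular Q-learning sketch. Shown only because the name "Q*"
# is speculated to reference Q-learning; this is NOT OpenAI's method.
# The environment interface (reset/step/n_actions) is an assumption.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(lambda: [0.0] * env.n_actions)  # state -> action values
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration
            if random.random() < epsilon:
                action = random.randrange(env.n_actions)
            else:
                action = max(range(env.n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env.step(action)
            # Update toward the optimal action-value function, often written Q*
            best_next = max(Q[next_state])
            Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
            state = next_state
    return Q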

Incidentally, the choice of Q for this AI also brings to mind the omnipotent entity that plagued the Enterprise crew in Star Trek: The Next Generation.
 
  • #2
Vanadium 50
gleem said:
The development of Q* also seems tied to Altman's removal as CEO.
Have they considered replacing him with an AI? CeoGPT could make statements like
"We're going forward with our plans to implement 'Outside the box' third-generation innovation. At the end of the day, we must have a laser-like focus on forward-looking global strategic time-phases."

Who would know the difference? Why spend millions on a real CEO when an AI-generated one spouts the same gibberish for less?

I'd find it easier to take it seriously if they did two things: cut back on the hype, and tell us what problem a given program is intended to solve. "Lookit! Lookit!" only goes so far.

I mean, "Greater transparency and a more fact-based paradigm will enhance our stature amongst our corporate peers as we move towards monetization of this techno-neurological - I mean neuro-technical..."
 
  • #3
Thanks @gleem! The future is smaller but more accurate LLMs trained for specific tasks.

It's clear to me that LLMs need some integration with knowledge graphs (KGs) for better accuracy (a minimal sketch of the idea follows the link below). That is why I think Bard/Claude will ultimately have a leg up, since Google has a world-class KG.
https://wordlift.io/blog/en/neuro-symbolic-ai/
 
  • #4
Vanadium 50 said:
I'd find it easier to take it seriously if they did two things: cut back on the hype, and tell us what problem a given program is intended to solve. "Lookit! Lookit!" only goes so far.
Certainly cannot disagree with that statement. Most of the hype, I think, is generated by the press, with some impetus coming from the researchers. I believe the researchers know a lot more than they will acknowledge. AI remains a work in progress, but as we see, it is getting incrementally better. And of course we can play with the released versions to find out for ourselves.

Whether they can develop an AI system that fits in a 1,300 cc space and uses 12 W of power, like our brain, may be a bigger challenge.
 
  • #5
Vanadium 50 said:
Have they considered replacing him with an AI? CeoGPT could make statements like
"We're going forward with our plans to implement 'Outside the box' third-generation innovation. At the end of the day, we must have a laser-like focus on forward-looking global strategic time-phases."

Who would know the difference? Why spend millions on a real CEO when an AI-generated one spouts the same gibberish for less?

I'd find it easier to take it seriously if they did two things: cut back on the hype, and tell us what problem a given program is intended to solve. "Lookit! Lookit!" only goes so far.

I mean, "Greater transparency and a more fact-based paradigm will enhance our stature amongst our corporate peers as we move towards monetization of this techno-neurological - I mean neuro-technical..."
Someone should let them know that if it can't replace the CEO, then it's not AGI yet.
 
  • #6
gleem said:
OpenAI has announced the development of a new AI model called Q-star, written Q*. It has reportedly been developed to solve math problems and appears to be competent at the grade-school level. The development of Q* also seems tied to Altman's removal as CEO.

Of course, there is a lot of hype around Q*. Nonetheless, it seems to be a significant step in the development of AGI. It combines generative AI with some logical reasoning capability. Supposedly it can "reason" its way through math problems. The best website that I found discussing Q* is

https://aitoolmall.com/news/how-to-use-q-star-ai-and-its-safety/

which includes directions for accessing it and describes some of its capabilities.

Incidentally, the choice of Q for this AI also brings to mind the omnipotent entity that plagued the Enterprise crew in Star Trek: The Next Generation.
I thought Q* and its details were mostly rumor at this point.

There was also a rumor that the "dangerous breakthrough" involved an AI that learned how to break into certain encrypted systems, something previously thought practically impossible, and it wasn't clear how it did it.

Either way, I wouldn't put much stock into AI rumors considering how many there are.
 
  • #8
I don't think that it will take that long. Google just announced its multimodal Gemini, which will have developer access this month. The capabilities demonstrated in the video are impressive. Some things of note at the end of the video: Google has some "interesting innovations" in the works for future versions of Gemini, and there will be rapid advancements next year. They are also actively working to integrate it with robotics.



Edit: While it looks fascinating, I find it troubling that they had to resort to editing tricks, selective display of responses, and outright fabrication, per @Filip Larsen's Ars Technica link below. We'll see how powerful it really is when it comes out.
 
Last edited:
  • #11
Here's the thing about all the AI hype, imo. Put the server farm running it in a total sandbox: no external access to anything, with a big red kill switch just in case. OK, now ask it about the biggest problems plaguing humankind: poverty, climate change, the theory of everything, etc. When someone reports back a viable hypothesis worth listening to on any of those questions, we can start to be worried/impressed. Right now it seems to be just a pattern-recombining number cruncher guessing at the desired output. There is no significant grand emergence or singularity, from what I understand. Put up the guardrails, for sure, but until there is proof in the pudding beyond a faux, glorified Turing test, it's way overblown. Currently, it can't even dependably drive a car.
 
  • #12
Vanadium 50
I think that's asking too much of AI. I think one gets a more accurate picture by thinking of "AI" as a collection of tasks that might be performed. No single task is "intelligent", much less "intelligence".

The ChatGPT experience shows us that the ability to answer free-form questions, sometimes even correctly, does not actually depend on domain knowledge, or really, any knowledge at all. Just as a pocket calculator showed that one can multiply 4-digit numbers without any intelligence at all.
 
  • #13
Vanadium 50 said:
I think that's asking too much of AI. I think one gets a more accurate picture by thinking of "AI" as a collection of tasks that might be performed. No single task is "intelligent", much less "intelligence".

The ChatGPT experience shows us that the ability to answer free-form questions, sometimes even correctly, does not actually depend on domain knowledge, or really, any knowledge at all. Just as a pocket calculator showed that one can multiply 4-digit numbers without any intelligence at all.
I agree. My use case is on the high end of the AI performance scale. I acknowledge that there are lots of applications that will benefit from the fast-evolving tech of AI. I guess my response is more of a reaction to the dramatic hype we see in media about having to "pull the plug" as AI chatbots develop their own language, and to the other rumors painting a Terminator-style sci-fi scenario. I'm suspicious of some of the investment and capital motives behind the buzz produced by the companies at the forefront of the tech.

Along the lines of your second point, it seems interesting to pick apart what the industry considers "intelligence" when it is in the business of simulating it. Lately, as pointed out here, the ability to "reason" through basic math is being framed as a big breakthrough, and similarly one is compelled to ask for their definition of reasoning.
 
Last edited:
  • #14
Borg
Vanadium 50 said:
I think one gets a more accurate picture by thinking of "AI" as a collection of tasks that might be performed. No single task is "intelligent", much less "intelligence".
That is one of the more accurate statements that I've seen. Too often, people criticize models for things that they weren't built to do.

I am currently working on building a GPT-like LLM pipeline from HuggingFace models. Building the pipeline involves multiple components that each affect the results in their own way. One of the more important aspects of these pipelines is prompt engineering, which is used to programmatically develop and optimize the prompts that are fed into language models (a bare-bones sketch is below). My initial pipeline is done, so the prompt engineering guide is on my Christmas reading list. :woot:
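For anyone curious, a bare-bones version of such a pipeline might look like this sketch using the HuggingFace transformers library, with gpt2 purely as a small stand-in model; the template string is the prompt-engineering piece, and a real pipeline would use a larger instruction-tuned model and richer templates.

# Bare-bones HuggingFace text-generation pipeline with a prompt template.
# gpt2 is a small stand-in; swap in a larger instruction-tuned model for real use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

PROMPT_TEMPLATE = "You are a helpful assistant.\nQuestion: {question}\nAnswer:"

def answer(question):
    prompt = PROMPT_TEMPLATE.format(question=question)
    out = generator(prompt, max_new_tokens=50, do_sample=False)
    # The pipeline returns a list of dicts containing the full generated text.
    return out[0]["generated_text"][len(prompt):].strip()

print(answer("What is 2 + 2?"))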
 
  • #15
cyboman
Borg said:
That is one of the more accurate statements that I've seen. Too often, people criticize models for things that they weren't built to do.

I am currently working on building a GPT-like LLM pipeline from HuggingFace models. Building the pipeline involves multiple components that each affect the results in their own way. One of the more important aspects of these pipelines is prompt engineering, which is used to programmatically develop and optimize the prompts that are fed into language models. My initial pipeline is done, so the prompt engineering guide is on my Christmas reading list. :woot:
But why are we spending so much effort to produce these models when we can just ask someone like Leonard Susskind what he thinks? It's like making an omelette and celebrating it as novel when a soufflé already exists.
 
  • #16
The creation of a language model isn't necessarily about replicating an existing expert's knowledge or insights. Instead, it's about leveraging computational tools to process vast amounts of data and generate useful, contextually relevant information across various domains. These models serve to assist in understanding, generating, and organizing information in ways that human experts, even highly knowledgeable individuals like Leonard Susskind, might not be able to match in terms of scale, speed, or accessibility. Also, the Susskind soufflé might not have enough time to answer everyone's questions.
 
  • #17
Greg Bernhardt
cyboman said:
But why are we spending so much effort to produce these models when we can just ask someone like Leonard Susskind what he thinks? It's like making an omelette and celebrating it as novel when a soufflé already exists.
Do you have access to Leonard Susskind on-demand? I'd encourage you to play around with at least the free LLMs like Bard, Claude, BLOOM, Llama 2, Bing Chat (GPT)
 
  • #18
Greg Bernhardt said:
Do you have access to Leonard Susskind on-demand? I'd encourage you to play around with at least the free LLMs like Bard, Claude, BLOOM, Llama 2, Bing Chat (GPT)
Point taken. I have played around with Bing a bit, and its language model and ability to process massive amounts of internet data in meaningful ways are compelling and powerful. It seems more like an intelligent, user-friendly search engine than the disruptive faux human intelligence that these companies hype it up and market it as.

Your response is succinct and I agree: AI is coming to play a very important role in making big data consumable and relevant to the human mind and the researcher, perhaps shortening the ramp-up time it takes for a specialist in any area to catch up with the decades of knowledge that came before them.

I recall futurists saying that one of the limits on solving some of the big questions is that it takes almost a lifetime just to ramp up on what the giants have written so as to stand upon their shoulders. If AI can augment the human capacity for knowledge consumption and processing, and in effect shorten that ramp-up time, that is a better sell as a tool, and a more honest one, than something enigmatic that threatens human intelligence itself.
 
Last edited:
  • #19
Borg
One of the very useful things that I've found is that when I'm trying to figure out a new coding technique, the process has sped up dramatically.

In the past, I would have to wade through various software libraries and try to learn what they were capable of and whether they fit my needs. This was a long and arduous process where I would go down fruitless paths. Often, the advertised capabilities didn't match what was claimed, the software was out of date, there were no decent code examples or documentation, and so on. With ChatGPT, I can tell it what I want to do and it responds with code that is extremely good. If I see a library that I'm not familiar with, I can ask direct questions about it and get answers specific to my exact question; no more trying to find an article on StackOverflow that looks close, only to find that it isn't relevant. This capability alone has saved me weeks of research time on several projects.

I will also reiterate a point that I've made in the past. The current LLMs are infants; ChatGPT has been out one year and has gone from text-only to being able to see, hear, speak, and create imagery with its multi-modal upgrades. Q* and Google Gemini are the next evolution on this path, and these advancements will continue well into the foreseeable future. More and more of the research papers that I read speak of emergent behaviors in these models. We're at the very beginning of this emergence, and what's coming will be spectacular.
 
Last edited:
  • #20
Borg said:
More and more of the research papers that I read speak of emergent behaviors in these models.

I would be very interested in more information on the emergent behaviors observed and how they are described in these papers.
 
  • #21
I'll keep an eye out the next time that I see it mentioned.
 
  • #22
OpenAI is launching $10 million in grants toward research on the oversight of superhuman AI systems, something they are calling Superalignment. Notably, this emergent behavior will not necessarily be human-like behavior.
We believe superintelligence could arrive within the next 10 years. These AI systems would have vast capabilities—they could be hugely beneficial, but also potentially pose large risks.

Today, we align AI systems to ensure they are safe using reinforcement learning from human feedback (RLHF). However, aligning future superhuman AI systems will pose fundamentally new and qualitatively different technical challenges.

Superhuman AI systems will be capable of complex and creative behaviors that humans cannot fully understand. For example, if a superhuman model generates a million lines of extremely complicated code, humans will not be able to reliably evaluate whether the code is safe or dangerous to execute. Existing alignment techniques like RLHF that rely on human supervision may no longer be sufficient. This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them?
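For readers unfamiliar with the RLHF mentioned above: its core ingredient is a reward model trained on human preference pairs. A minimal sketch of that training objective (a Bradley-Terry style loss), assuming PyTorch and a stand-in linear reward model, might look like this:

# Sketch of the reward-model objective at the heart of RLHF (Bradley-Terry loss).
# reward_model is a stand-in: any network mapping a response encoding to a scalar.
import torch
import torch.nn.functional as F

def preference_loss(reward_model, chosen, rejected):
    # Push the reward of the human-preferred response above the rejected one;
    # -log sigmoid(r_chosen - r_rejected) is minimized when chosen outranks rejected.
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage with a linear reward model over 8-dim "response encodings".
reward_model = torch.nn.Linear(8, 1)
chosen, rejected = torch.randn(4, 8), torch.randn(4, 8)
loss = preference_loss(reward_model, chosen, rejected)
loss.backward()  # in real training this gradient feeds an optimizer step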
 
  • #24
Greg Bernhardt said:
That's likely at least 10 times too little.
Yeah, it doesn't seem like much, especially when they state that a single grant could be $2 million of the $10 million.
 

FAQ: Is Q* the Key to Achieving AGI and Solving Math Problems?

Is Q* the ultimate key to achieving Artificial General Intelligence (AGI)?

While Q* is reported to show promising results on math problems, it is unlikely to be the sole key to achieving AGI. AGI is generally expected to require a combination of many technologies and approaches, and the details of Q* remain unconfirmed.

Can Q* solve all types of math problems?

According to the reporting, Q* appears competent at grade-school-level math. Whether it generalizes to harder mathematics is unknown.

How does Q* compare to other AI approaches to solving math problems?

Reports suggest Q* combines generative AI with some form of logical reasoning or search, which would distinguish it from purely generative language models. Without published benchmarks, direct comparisons are not possible.

Can Q* be used to solve real-world problems beyond math?

If Q* does add planning or reasoning capabilities to generative AI, the same techniques could in principle apply to other domains, but this is speculation until OpenAI publishes details.

What are the limitations of using Q* for solving math problems?

Since nothing official has been published, its limitations are unknown. Current reporting suggests competence only at the grade-school level, and much of the surrounding discussion is rumor.
