DeepMind AI Develops Efficient Sorting Algorithms

In summary, Google's DeepMind has developed a system that writes efficient sorting routines for small inputs (2 to 8 elements), where "efficient" means minimal execution time with ample code memory available. Hand-optimizing such routines was common practice decades ago, when computers had very limited resources. With modern compiler and linker optimization techniques, this result can be used to build a library sort that is as specifically optimized as the hand-selected code of that era. Coding at the assembly level was once common, but optimization methods like this aim to make it unnecessary. The number of elements being sorted is sometimes known at compile time, though often it varies. And when performance is truly critical, it can be worth spending development time on the sort rather than simply buying a faster computer.
  • #2
Tom.G said:
Some historical perspective:

In "modern" times, programmers sort using published libraries. But there was a time, just 30+ years ago, when people wrote their own sorts (for example). They were using machines with very limited memory resources and processor power. So, in resource-critical cases, development time was spent identifying ideal solutions - and it was common to go down to the assembly/register level looking for the optimum method to apply for a that particular project.

What "deepmind" has done is to find the most optimal sort solutions for cases where there are a small number of elements to be sorted (2 through 8) and when "optimal" is defined as minimal execution time with ample code memory resources. It's an exercise I performed for element counts of 2 through 6 when I was working out the details of that Zig Zag sort (same example).

But I haven't used my own sort in decades. It rarely makes any sense anymore.

However, there is definitely an advantage to be had with this DeepMind result. With modern compiler and linker optimization techniques, a library sort built on this result can select sort code that is as specifically optimized as the code that was hand-selected decades ago. So if you are sorting exactly 7 elements, it will replace your "sort(7,data,comp(&a,&b));" with a more customized "sort_7(data,comp(&a,&b));".
Coding at the assembly level was once a common optimization practice. A key goal of this kind of optimization is to make assembly completely unnecessary as an optimization method.
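
A minimal C++ sketch of the idea (the names sort_3 and sort_small are illustrative, not taken from the DeepMind work or any particular library): when the element count is a compile-time constant, the call can resolve to a branch-light routine for that exact size instead of a general-purpose sort.

Code:
#include <algorithm>
#include <cstdio>

// Illustrative fixed-size routine: a 3-element sorting network,
// i.e. compare-and-swap on the pairs (0,1), (1,2), (0,1).
inline void sort_3(int* d) {
    auto cswap = [](int& a, int& b) { if (b < a) std::swap(a, b); };
    cswap(d[0], d[1]);
    cswap(d[1], d[2]);
    cswap(d[0], d[1]);
}

// Hypothetical dispatcher: N is known at compile time, so the
// specialized routine is selected with no run-time overhead.
template <int N>
void sort_small(int* d) {
    if constexpr (N == 3) sort_3(d);
    else                  std::sort(d, d + N);   // generic fallback
}

int main() {
    int data[3] = {42, 7, 19};
    sort_small<3>(data);                         // resolves to sort_3
    std::printf("%d %d %d\n", data[0], data[1], data[2]);
    return 0;
}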
 
  • #3
I can't possibly be old enough to remember the times @.Scott is talking about - yet I do.

I "real life" - i.e. when someone is paying you to sort something. (or write a program to sort something) you typically have one of two problems: either the whole thing is unsorted, or it is mostly sortted with a few unsorted elements at the end.

These cases were generally identifiable "by hand" and, if it made sense, treated accordingly. Rarely did it make sense to go beyond this - the time it would take to study the situation and pick the right algorithm was usually large compared to the time difference between the various algorithms.
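
For the second case - mostly sorted with a few stragglers at the end - "treating it accordingly" could be as simple as this hedged C++ sketch (not a quote of any real project's code): sort only the small unsorted tail, then merge it back into the already-sorted prefix.

Code:
#include <algorithm>
#include <vector>

// 'data' is sorted except for the last 'tail' elements.
// Sorting just the tail and merging is far cheaper than re-sorting everything.
void fix_tail(std::vector<int>& data, std::size_t tail) {
    auto mid = data.end() - static_cast<std::ptrdiff_t>(tail);
    std::sort(mid, data.end());                        // sort the few stragglers
    std::inplace_merge(data.begin(), mid, data.end()); // merge back into the prefix
}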

It does no good to spend an hour trying to speed up a sort by 30 minutes. It may not even make sense to hire a team of programmers to speed up a sort by 30 minutes compared to just getting a faster computer.
 
  • #4
.Scott said:
if you are sorting exactly 7 elements
How often will the number of elements being sorted be known at compile time?
 
  • #5
.Scott said:
Coding at the Assembly level was once a common optimization practice.
I am writing assembly code at the moment because I demand time-critical control of external signals while all interrupts are disabled and exceptions are impossible.

An optimising compiler can get close enough to perfection for most applications.
I do notice with these AI-generated sorts that more time goes into preparing to save a cycle than will ever be saved by the slightly improved process. KISS.
 
  • #6
PeterDonis said:
How often will the number of elements being sorted be known at compile time?
Sometimes it is very, very stable. The number of hours in a day hasn't changed since the invention of computers. The number of states in the US has been stable for more than half a century. Even the number of schools in the Big Ten is constant on the time scale of compiling.

But....

The most likely spot to find it is in a divide-and-conquer algorithm where you break N items into groups of M. N can vary, but M can stay constant.
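
A hedged sketch of that pattern (the fixed group size M = 5 and the function name are illustrative): N varies at run time, but every full group has exactly M elements, so the per-group call is the kind of fixed-size sort that could dispatch to a specialized routine.

Code:
#include <algorithm>
#include <vector>

constexpr std::size_t M = 5;   // illustrative constant group size

// N = v.size() varies, but each full group is exactly M elements,
// so this call site could resolve to a size-specialized sort_5().
void sort_each_group(std::vector<int>& v) {
    std::size_t i = 0;
    for (; i + M <= v.size(); i += M)
        std::sort(v.begin() + i, v.begin() + i + M);
    std::sort(v.begin() + i, v.end());              // leftover partial group
}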
 
  • #7
PeterDonis said:
How often will the number of elements being sorted be known at compile time?
An easy example that answers your question, but not the general point:
You decide that you will get sufficiently reliable results if you take the median value of 7 samples.
So you sort seven results and take the middle value.

But, if optimization were that important, you could optimize the entire 7-sample median algorithm.
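
A C++ sketch of exactly that (the function name is illustrative and integer samples are assumed): sort the seven values and return the middle one. The comments note where a size-7 specialization would slot in, and one way the whole median step could be optimized further.

Code:
#include <algorithm>
#include <array>

// Median of exactly 7 samples: sort, then take the middle element.
// The std::sort call is where a specialized sort_7 (or a 7-element sorting
// network) could be substituted; going further, std::nth_element would
// find the median without fully sorting the samples.
int median_of_7(std::array<int, 7> s) {
    std::sort(s.begin(), s.end());
    return s[3];   // index 3 is the middle of 7 sorted values
}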
 
  • #8
Vanadium 50 said:
It does no good to spend an hour trying to speed up a sort by 30 minutes. It may not even make sense to hire a team of programmers to speed up a sort by 30 minutes compared to just getting a faster computer.
And sometimes it does.
So I needed the median value from a list of 256 integer values. The total time spent developing, documenting, and implementing the final optimized algorithm was several man-weeks - and the final result is very fast but only "accurate enough" (perfect over 99% of the time).
The un-optimized version took about 300 usec. The optimized version takes a couple of usec.
But it gets executed 20 times per second in a consumer automobile radar unit, and 300 usec was way over its time budget.
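
The radar unit's actual algorithm isn't described here, but as a hedged illustration of how a median of 256 fixed-width integers can be made very cheap, one common approach is a single counting pass; this sketch assumes (hypothetically) 8-bit samples.

Code:
#include <array>
#include <cstdint>

// Histogram median: one pass to count values, then a scan over the 256
// possible values to find where the cumulative count reaches the midpoint.
std::uint8_t median_256(const std::array<std::uint8_t, 256>& samples) {
    std::array<std::uint16_t, 256> hist{};          // one bucket per value
    for (std::uint8_t v : samples) ++hist[v];

    std::uint32_t seen = 0;
    for (std::uint32_t value = 0; value < 256; ++value) {
        seen += hist[value];
        if (seen >= 128)                            // lower median of 256 samples
            return static_cast<std::uint8_t>(value);
    }
    return 255;                                     // not reached
}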
 

Related to DeepMind AI Develops Efficient Sorting Algorithms

1. How does DeepMind AI develop efficient sorting algorithms?

DeepMind's system develops efficient sorting algorithms through reinforcement learning: it proposes candidate instruction sequences, receives a reward based on whether the resulting program sorts correctly and how fast (or short) it is, and uses that feedback to steer the search toward better algorithms.
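
As a toy illustration of "search guided by a reward" - not DeepMind's actual setup, which works over real CPU instructions with a learned policy - candidate programs here are sequences of compare-and-swap steps on 3 elements, and the reward favors programs that sort all test permutations while using few steps.

Code:
#include <algorithm>
#include <array>
#include <random>
#include <utility>
#include <vector>

using CompareSwap = std::pair<int, int>;            // indices to compare-and-swap
using Program = std::vector<CompareSwap>;

// Reward: +10 per test permutation sorted correctly, -1 per instruction.
int reward(const Program& p) {
    static const std::array<std::array<int, 3>, 6> tests = {{
        {1,2,3}, {1,3,2}, {2,1,3}, {2,3,1}, {3,1,2}, {3,2,1}}};
    int score = 0;
    for (auto t : tests) {
        for (auto [i, j] : p) if (t[j] < t[i]) std::swap(t[i], t[j]);
        if (std::is_sorted(t.begin(), t.end())) score += 10;
    }
    return score - static_cast<int>(p.size());
}

// Greedy random search: keep proposing an extra compare-and-swap step and
// accept the candidate only when it improves the reward.
Program search(int iterations, std::mt19937& rng) {
    std::uniform_int_distribution<int> idx(0, 2);
    Program best;                                   // start from the empty program
    for (int it = 0; it < iterations; ++it) {
        Program cand = best;
        cand.emplace_back(idx(rng), idx(rng));
        if (reward(cand) > reward(best)) best = cand;
    }
    return best;   // tends toward the 3-step network (0,1), (1,2), (0,1)
}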

2. What are the benefits of using DeepMind AI for developing sorting algorithms?

The benefits of using DeepMind AI for developing sorting algorithms include the ability to quickly and efficiently optimize algorithms, leading to faster and more accurate sorting results. Additionally, the AI can adapt to different types of data and sorting tasks, making it versatile and adaptable.

3. Can DeepMind AI outperform traditional sorting algorithms?

Yes, for small fixed-size inputs DeepMind's routines have been shown to outperform the previous hand-tuned library code in speed. Because the system searches at the level of individual instructions, it can tailor its algorithms to specific input sizes and data types.

4. How does DeepMind AI compare to other AI systems in developing sorting algorithms?

DeepMind AI is known for its advanced capabilities in developing sorting algorithms, outperforming many other AI systems in terms of efficiency and accuracy. The AI's use of reinforcement learning allows it to quickly adapt and optimize its algorithms for various sorting tasks.

5. What are the potential applications of DeepMind AI-developed sorting algorithms?

The improved small-sort routines have been contributed to widely used standard libraries, so any workload that sorts or organizes data - data processing pipelines, databases, search, and machine-learning systems - can benefit from them.
