Recovering Lost Bitcoins: FPGA vs GPU Performance Comparison

In summary, the poster is seeking advice on recovering lost bitcoins by brute-forcing a private key of which they still have 24 of the 32 bytes. They are unsure whether an FPGA or a GPU would be more suitable for the task. The replies suggest that a GPU would be the more practical choice, that learning and implementing an FPGA solution could consume most of the available time, and supply calculations showing that searching the full 8-byte keyspace would take centuries at a plausible rate.
  • #1
Stonestreecty
TL;DR Summary
I have a question regarding FPGA vs GPU performance (I've reviewed this). I’m trying to recover lost bitcoins that I mined in the early days. Do you think an FPGA like this could do it in a reasonable time?
Hello all,
I have a question regarding FPGA vs GPU performance (I've reviewed it before). I’m trying to recover lost bitcoins that I mined in the early days. I knew it was important to keep the private key, but I somehow managed to lose it. However, I still have 24 of its 32 bytes, found on half a piece of paper from when I printed the key back in 2012.

So I have 24 of the 32 bytes of my private key. The only way to recover the remaining 8 bytes is brute force. But I’m not familiar with FPGAs, and I’m totally unsure how fast they would be able to do these calculations.

The required calculation is: increment a 256-bit number (starting at the lower boundary of the keyspace fixed by the 24 bytes I have), perform the elliptic-curve scalar multiplication to obtain the public key, compute ripemd160(sha256(publicKey)), and compare the resulting hash160 with the hash160 of my address. If they are equal, I have found my private key and can recover my bitcoins.
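The per-candidate pipeline described above can be sketched in pure Python. This is a minimal illustration only: pure Python is many orders of magnitude too slow for a real search of this size, the `ripemd160` algorithm in `hashlib` is only available if the underlying OpenSSL build provides it, and the uncompressed public-key encoding is an assumption based on what 2012-era wallets typically used.

```python
# Sketch of the per-candidate pipeline: scalar-multiply, hash, compare.
# Illustrative only -- pure Python is far too slow for a 2^64 search.
import hashlib

# secp256k1 domain parameters (the curve used by Bitcoin)
P  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def ec_add(p1, p2):
    """Add two affine points on y^2 = x^3 + 7 over GF(P); None = infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                  # opposite points
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P     # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point=(Gx, Gy)):
    """Double-and-add: compute k * point."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def hash160(pubkey_bytes):
    """ripemd160(sha256(pubkey)); ripemd160 support depends on OpenSSL."""
    return hashlib.new('ripemd160', hashlib.sha256(pubkey_bytes).digest()).digest()

def candidate_hash160(priv):
    """hash160 of the uncompressed public key for candidate private key `priv`."""
    x, y = scalar_mult(priv)
    return hash160(b'\x04' + x.to_bytes(32, 'big') + y.to_bytes(32, 'big'))
```

The actual search would loop `candidate_hash160(k)` over the 8 unknown bytes and stop when the result matches the target hash160; real attempts at this use GPU code (e.g. batched affine additions with a single shared inversion), not a generic scalar multiply per key.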

Do you think an FPGA like this could do this in a reasonable time? I don't mind if it takes a year, for example, but there is no point in doing this if it takes more than 100 years. I’m trying to figure out whether it’s worth going with an FPGA in order to recover roughly 110 BTC. Maybe I would need too many FPGAs and it wouldn't be worth it. Or do you think high-end GPUs like an NVIDIA 1080 Ti would be better suited for the job?

If you think an FPGA can be used for this: what kind of FPGA am I looking at, how much do they cost, and how many would I need?

Best regards
 
  • #2
Having looked into both in the past, but with no direct experience with either, my opinion is that learning, designing, and implementing a working FPGA solution would take a substantial portion (all?) of your one-year time frame.

Go with the GPU; GPUs already provide the pipelining and parallelism that you would otherwise have to design and implement yourself in an FPGA.

Good Luck!
 
  • #3
@Stonestreecty, it's infeasible for you to test all ##256^8## possibilities for your lost key. Even at a rate of ##10^9## candidate keys per second, testing all of them would take ##256^8 / 10^9 \,[\text{keys/s}] / 3600\,[\text{s/h}] / 24\,[\text{h/day}] / 365.25\,[\text{day/yr}] \approx 584.5## years, and on average it would take about half that time to find your quarry.
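The arithmetic behind that estimate is easy to check: 8 unknown bytes give ##256^8 = 2^{64}## candidate keys, and the ##10^9## keys-per-second rate is an assumed round figure, not a measured benchmark.

```python
# Back-of-the-envelope check of the estimate above.
SECONDS_PER_YEAR = 3600 * 24 * 365.25

keys = 256 ** 8          # 2**64 = 18,446,744,073,709,551,616 candidates
rate = 10 ** 9           # assumed: one billion keys tested per second
years_full = keys / rate / SECONDS_PER_YEAR

print(round(years_full, 2))      # ≈ 584.54 years to sweep the whole range
print(round(years_full / 2, 2))  # ≈ 292.27 expected years until a hit
```

Even with a thousand devices running in parallel, the expected time only drops to about 0.29 years times a thousandfold cost, which is the core of the infeasibility argument.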
 

FAQ: Recovering Lost Bitcoins: FPGA vs GPU Performance Comparison

What is the difference between an FPGA and a GPU?

An FPGA (Field-Programmable Gate Array) is an integrated circuit that can be programmed to implement arbitrary digital logic and reconfigured many times. A GPU (Graphics Processing Unit), on the other hand, is a specialized processor with a fixed architecture designed for high-speed graphics rendering and parallel computing. FPGAs are more flexible and can be tailored to a wider range of tasks, while GPUs are optimized for massively parallel arithmetic on a fixed instruction set.

Which one is better for parallel computing - FPGA or GPU?

Both FPGA and GPU are capable of parallel computing. However, the optimal choice depends on the specific application. FPGAs are better for highly customized and specialized tasks, while GPUs are more suitable for general-purpose parallel computing.

Can FPGAs be used for machine learning and deep learning applications?

Yes, FPGAs can be used for machine learning and deep learning applications. They offer high parallel-processing capability and can be programmed for specific neural-network architectures, making them well suited to accelerating these kinds of workloads.

What are the advantages of using an FPGA over a GPU?

One of the main advantages of using an FPGA over a GPU is its flexibility. FPGAs can be reconfigured and customized for specific tasks, making them more adaptable to changing requirements. They also offer lower power consumption and higher performance for certain applications.

Are FPGAs more expensive than GPUs?

Yes, FPGAs are generally more expensive than GPUs. This is because FPGAs are highly customizable and require specialized programming tools, which add to the overall cost. However, for certain applications where performance and flexibility are critical, the investment in an FPGA may be worth it.
