Using CUDA with gFortran - Is it Possible?

  • Fortran
  • Thread starter: aihaike
  • Tags: gfortran
In summary, the thread discusses using the GPU for computation, specifically CUDA with different graphics cards. CUDA works only with NVIDIA graphics cards, and the PGI Fortran compiler, which supports CUDA Fortran, is not free. The thread ends with the question of whether CUDA is implemented in the GNU compiler, and the answer is that it is not.
  • #1
aihaike
Dear all,

I wish to port parts of my codes to run on the GPU of my graphics card, but I have no idea whether GNU has already implemented support for this (as it has for OpenMP). I know there is the PGI Fortran compiler, which embeds CUDA, but I gather it is not free.
I also wonder whether that works only with ATI graphics cards or whether, for instance, I can use my Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller.



Eric.
 
  • #2
CUDA only works with NVIDIA graphics cards. And even then you are unlikely to see gains unless you have something reasonably modern (I'd say a GeForce 9800 is the bare minimum). ATI cards have a different programming interface, which is similar in spirit but distinct. The Intel controller supports neither.

You will most likely have to code by hand, but there may be some third-party computation libraries that use CUDA internally. A minimal sketch of the by-hand route follows.
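
A minimal sketch of that approach, assuming an NVIDIA card and the CUDA toolkit are available (the file names and the saxpy example are placeholders, not anything standard): the kernel lives in CUDA C behind a C-linkage launcher, and plain gfortran code calls it through ISO_C_BINDING.

  // kernel.cu -- trivial CUDA C kernel plus a C-linkage launcher,
  // so gfortran can call it like a plain C function.
  #include <cuda_runtime.h>

  __global__ void saxpy_kernel(int n, float a, const float *x, float *y)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) y[i] = a * x[i] + y[i];
  }

  extern "C" void saxpy_gpu(int n, float a, const float *x, float *y)
  {
      float *x_d, *y_d;
      cudaMalloc((void **)&x_d, n * sizeof(float));
      cudaMalloc((void **)&y_d, n * sizeof(float));
      cudaMemcpy(x_d, x, n * sizeof(float), cudaMemcpyHostToDevice);
      cudaMemcpy(y_d, y, n * sizeof(float), cudaMemcpyHostToDevice);
      saxpy_kernel<<<(n + 255) / 256, 256>>>(n, a, x_d, y_d);
      cudaMemcpy(y, y_d, n * sizeof(float), cudaMemcpyDeviceToHost);
      cudaFree(x_d);
      cudaFree(y_d);
  }

  ! main.f90 -- plain Fortran, compiled with gfortran, calling the
  ! launcher above through ISO_C_BINDING.
  program test_saxpy
    use iso_c_binding, only: c_int, c_float
    implicit none
    interface
      subroutine saxpy_gpu(n, a, x, y) bind(c, name="saxpy_gpu")
        import :: c_int, c_float
        integer(c_int), value :: n
        real(c_float), value  :: a
        real(c_float)         :: x(*), y(*)
      end subroutine saxpy_gpu
    end interface
    integer(c_int), parameter :: n = 1024
    real(c_float) :: x(n), y(n)
    x = 1.0; y = 2.0
    call saxpy_gpu(n, 2.0_c_float, x, y)   ! y = 2*x + y, computed on the GPU
    print *, y(1)                          ! expect 4.0
  end program test_saxpy

Build roughly like this (the CUDA install path varies by system):

  nvcc -c kernel.cu
  gfortran main.f90 kernel.o -L/usr/local/cuda/lib64 -lcudart -o saxpy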
 
  • #3
Thanks,
Now suppose I have an ATI card: is CUDA implemented in the GNU compiler (gfortran)?

Eric.
 
  • #4
I reiterate: CUDA only works with NVIDIA graphics cards. :) To my knowledge, it is not implemented in the GNU compiler. You'd have to use PGI.
 
  • #5
That's exactly what I was wondering.
Thanks.
 

Related to Using CUDA with gFortran - Is it Possible?

1. Can I use CUDA with gFortran?

Not directly. gFortran does not compile CUDA Fortran; that dialect is supported by the PGI compilers (now part of the NVIDIA HPC SDK). What is possible is a mixed-language approach: write the GPU kernels in CUDA C, compile them with nvcc, and call them from gFortran code through ISO_C_BINDING, linking against the CUDA runtime, as sketched in post #2 above. Combined this way, the two can be used to write high-performance code for scientific and engineering applications.

2. What are the benefits of using CUDA with gFortran?

The main benefit of using CUDA alongside gFortran is the ability to harness the GPU (Graphics Processing Unit) for parallel computing. GPUs are designed to perform large numbers of calculations in parallel, which can significantly speed up complex scientific and engineering simulations.
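
For a feel of what this looks like in practice, here is a rough sketch in the CUDA Fortran dialect mentioned in the thread (PGI-specific syntax; it will not compile with gFortran, and the saxpy example is just a placeholder):

  ! CUDA Fortran sketch (requires the PGI / NVIDIA compilers).
  module saxpy_mod
  contains
    attributes(global) subroutine saxpy(n, a, x, y)
      integer, value :: n
      real, value :: a
      real :: x(n), y(n)
      integer :: i
      i = (blockIdx%x - 1) * blockDim%x + threadIdx%x
      if (i <= n) y(i) = a * x(i) + y(i)   ! each GPU thread handles one i
    end subroutine saxpy
  end module saxpy_mod

  program main
    use cudafor
    use saxpy_mod
    implicit none
    integer, parameter :: n = 1024
    real :: x(n), y(n)
    real, device :: x_d(n), y_d(n)   ! arrays living in GPU memory
    x = 1.0; y = 2.0
    x_d = x; y_d = y                 ! host-to-device copies by assignment
    call saxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x_d, y_d)
    y = y_d                          ! device-to-host copy
    print *, y(1)                    ! expect 4.0
  end program main

The point of the dialect is that data movement and kernel launches are expressed in Fortran itself, instead of going through a C shim.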

3. Is it difficult to learn how to use CUDA with gFortran?

It can be challenging to learn how to use CUDA with gFortran, especially if you are not familiar with parallel programming concepts. However, there are many online resources, tutorials, and forums available to help you get started. It is also recommended to have a good understanding of Fortran and basic programming principles before delving into CUDA.

4. Are there any limitations when using CUDA with gFortran?

One limitation of using CUDA with gFortran is that it only works with NVIDIA GPUs. Additionally, not all Fortran code can be easily parallelized using CUDA, so it is essential to choose carefully which parts of your code will benefit from GPU acceleration; a short illustration follows.
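
As an illustration (a generic sketch, not code from the thread): the first loop below has independent iterations and maps well onto GPU threads, while the second carries a dependency between iterations and cannot be parallelized as written:

  ! Good candidate: every iteration is independent of the others.
  do i = 1, n
    y(i) = a * x(i) + y(i)
  end do

  ! Poor candidate: iteration i needs the result of iteration i-1,
  ! so the iterations cannot run concurrently as written.
  do i = 2, n
    y(i) = y(i) + y(i-1)
  end do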

5. Can I use CUDA with gFortran on any operating system?

CUDA is primarily supported on Windows, Linux, and macOS, and gFortran is available on all of these operating systems. However, some features may be limited or unavailable on certain platforms, so it is best to check the system requirements before combining CUDA with gFortran.
