Use of 'l' or 'L' and 'u' or 'U' in front of the right data type.

In summary, Hungarian notation is a system of naming variables by adding a prefix to indicate their data type. It was originally intended to help with readability and error checking, but has since become less popular. Some argue that it is still useful in certain situations, while others believe it is unnecessary.
  • #1
dE_logics
We put a 'u' in front of an unsigned value or 'l' in front of a long value, and so on for different data types, but what's the utility of this?
 
  • #2
Sounds like a form of Hungarian notation. It's supposed to remind you of the type of the data. I've read a good argument that Hungarian notation as it was originally intended ("Applications Hungarian") was a good idea, but Hungarian notation adapted to denote data types only ("Systems Hungarian") was not.
 
  • #3
So it actually has no use apart from making the program machine readable.
 
  • #4
It's for making the program HUMAN readable.

Hungarian notation is somewhat useful when you don't have an IDE to help you identify potential typing errors or remind you what the scope of a variable is.

It's somewhat frowned upon these days, especially the Microsoft variety, e.g. m_intPosX.
 
  • #5
Oh sorry, human readable :smile:
 

Related to Use of 'l' or 'L' and 'u' or 'U' in front of the right data type.

1. What is the difference between the 'l' or 'L' and 'u' or 'U' suffixes on a value?

The letters 'l'/'L' and 'u'/'U' are suffixes appended to an integer constant to set its type: 'l' or 'L' makes the constant long, while 'u' or 'U' makes it unsigned, and they can be combined, as in 10UL. The type in turn determines the size and range of the constant.

2. Can I use 'l' or 'L' and 'u' or 'U' interchangeably?

No. 'l'/'L' and 'u'/'U' mean different things: one makes a constant long, the other makes it unsigned. They can be combined (10UL) but not swapped, and using the wrong suffix can change the type of an expression and lead to unexpected results or errors in your code.

3. When should I use 'l' or 'L' and 'u' or 'U'?

Use a suffix whenever the default type of an unsuffixed constant (int) is not what you want: when the value is too large for an int, or when an expression must be evaluated in long or unsigned arithmetic. This is particularly important with large numbers and with bit manipulation.

4. Are there values that do not need 'l' or 'L' or 'u' or 'U'?

Yes. An integer constant written without a suffix has type int (or, if the value does not fit in an int, the next larger type that can hold it), so ordinary int-sized values need no suffix.

5. What are the consequences of leaving out 'l' or 'L' and 'u' or 'U'?

Leaving out a needed suffix can produce unexpected results or errors: the constant is treated as an int, so an intermediate expression can overflow or lose precision before the result is stored in a wider variable. Mixing signed operands with unsigned constants can also flip the outcome of a comparison. Using the correct suffix makes the intended type explicit.
