Recent Linux News and Intel Indirect Branch Tracking

In summary, the article discusses enabling Indirect Branch Tracking by default in the Linux kernel. The change has been picked up by the x86/core branch and is now on deck for submission in next month's Linux merge window.
  • #1
TL;DR Summary
Intel added a feature to protect against certain types of hacker attacks and now Linux is planning to enable it.
https://www.phoronix.com/news/Linux-IBT-By-Default-Tip

Enabling Indirect Branch Tracking by default had been eyed as an enhancement to the out-of-the-box Linux kernel in its default x86_64 configuration. That change has now been picked up by TIP's x86/core branch, putting it on deck as material for next month's Linux 6.2 merge window.

Indirect Branch Tracking is part of Intel Control-flow Enforcement Technology (CET), available with Tiger Lake CPUs and newer. IBT provides indirect branch protection to defend against JOP/COP (jump- and call-oriented programming) attacks by ensuring indirect calls land on an ENDBR instruction.

IBT is part of the broader Control Flow Integrity strategy:

https://en.wikipedia.org/wiki/Control-flow_integrity
 
  • #2
I wonder how different things would be if the 80286 protection model caught on.
 
  • #3
I thought for sure people would be piling on after my defense of the much-hated 80286.
 
  • #4
Vanadium 50 said:
I thought for sure people would be piling on after my defense of the much-hated 80286.
Nah. Many of us avoided Intel in those days.
 
  • #5
Vanadium 50 said:
I wonder how different things would be if the 80286 protection model caught on.
Hmm. My first PC had an 8086, but then I moved to Sun Sparc with proper virtual memory management and stopped using MS-DOS totally in favour of Solaris.

I didn't hear people (anyone, actually) whinging about Sparc + Solaris.
 
  • #6
The problem with the 80286 is that it had "proper" memory management. A number of tricks were used in 8086 code to access more than 64K efficiently, and these tricks didn't work with the 80286. Neither the 68000 nor the SPARC had the issue of a legacy chip with lots of legacy code.
 
  • #7
Vanadium 50 said:
The problem with the 80286 is that it had "proper" memory management. A number of tricks were used in 8086 code to access more than 64K efficiently, and these tricks didn't work with the 80286. Neither the 68000 nor the SPARC had the issue of a legacy chip with lots of legacy code.
The problem with the 80286 is that its memory management was just too slow if you wanted more than 64 KB of data or code per process.
If you wanted to access more memory in the same process, you needed to load a segment register, which involved fetching 6 bytes from a segment descriptor table in main memory.

If you had more than 64 KB of code, you also needed to load a segment register on every jump, call or return.
There's no way this could have ever caught on.
 
  • #8
So, in the 8080 days you had 64k and that was it.

For the 8086/8 you had multiple 64 KB segments. There was no memory protection and segments could overlap. If you needed 128 KB of memory, you'd set up three segments: the top 64 KB, the bottom 64 KB and one in the middle, so there wasn't a huge penalty for access just above and just below the line.

When the 80286 came along, you had memory protection, and segments were replaced with selectors. They couldn't overlap, because that would defeat the protection. And that broke a lot of code. The thinking at the time was that selectors would usually refer to a small piece of memory - much smaller than 64 KB. This was wasteful, but since there were 64x as many virtual addresses as real addresses, it didn't seem like a problem.

I maintain that if we adopted this model, the kind of exploits discussed would be much rarer.

So what went wrong? C. (IMO)

C was becoming wildly popular around this time. And one of the things C gave you was a high-level language with pointer arithmetic. In this model, pointer arithmetic, particularly across selectors, was tricky. The way the 80286 implemented selectors and offsets made it particularly tough. But basically, C had a worldview of a single giant, unprotected address space.

Worse, the one OS available at the IBM PC/AT's launch that supported protected mode was Xenix - Microsoft's Unix. And Unix is married to C.

I believe that had we gone down this path, we wouldn't have so many of these kinds of exploits. I am sure we would have a completely different set of problems. Just not these ones. But we're still living with consequences of decisions made over 35 years ago.
 
  • #9
Vanadium 50 said:
So, in the 8080 days you had 64k and that was it.
If you could afford it. 2K was as far as my budget went 😢

Vanadium 50 said:
I believe that had we gone down this path, we wouldn't have so many of these kinds of exploits. I am sure we would have a completely different set of problems. Just not these ones. But we're still living with consequences of decisions made over 35 years ago.
Well Indirect Branch Tracking is not really about cross-segment exploits, but I think I get what you mean.
 
  • #10
pbuk said:
If you could afford it. 2K was as far as my budget went
My predecessors at GE made a digital boiler control system that had two 24-bit words as the only memory other than a magnetic drum. Circa 1963. They did it and it worked!

I remember my first year at GE. Core memory cost $1 per bit. 1K of memory cost the same as an engineer's annual salary.
 
  • #11
pbuk said:
If you could afford it
I could only afford the zeros. The ones were too expensive for me.

Yes, maybe this isn't the best example. But we have had a steady stream of attacks over the years, and many of them rely on a lack of memory protection. Having code write anywhere in memory is a Bad Idea. Being able to execute data (in data segments) is a Bad Idea. Getting rid of all this security nonsense so our code will run incrementally faster is a Bad Idea.
 
  • #12
Vanadium 50 said:
Having code write anywhere in memory is a Bad Idea.
There was always a problem with that. One man's code is always another man's data.

We compiled source to object, then linked object to executable, then sprinkled pixie dust on the result and data became code.
 
  • #13
anorlunda said:
One man's code is always another man's data.
This is a solved problem. The simplest case is reading a program (data) from a disk. The modern solution is the "NX bit", whereby a region of memory can be marked as "not executable". The ancient solution was, I believe, to copy the data into the code segment.

The basic idea, though, is that one can put layers and layers of protection on the code. The more you do this, the more protected you are against mischief, but the more work it is to code. There may be a performance hit as well.
 
  • #14
The ancient way was to mix data and program code. My old GE boss was a master of GE/Honeywell 6000 macro assembler code and would reuse his initialization code areas for data buffers making his code arguably the most compact and difficult to debug once an error occurred.
 
  • #15
jedishrfu said:
The ancient way was to mix data and program code.
Well, yes, but that wasn't very secure.

I once wrote code that deliberately executed the stack.
 
  • #16
We had no notion of a stack in 1960s mainframe code. Subroutine calls were linked using registers, which was fast, but you could never recurse, as overwriting the single saved return address would create an infinite loop.

Security was not an issue, we had user mode and master mode only with Timesharing having additional restrictions on what could be done.
 
  • #17
jedishrfu said:
Security was not an issue
Sure it was. One solved by a lock on the door. :smile:
 
  • #18
As John Wayne once said in the The Quiet Man:

There'll be no locks or bolts between us Mary Kate... except those in your mercenary heart.

And so it was at GE: we had badges, those funny things we were given to wear, and one dude came up with a piece of card over his crotch area with the badge hanging from it. We had one stairwell where, if you entered it without your badge, you had a choice of waiting for some poor soul to rescue you or setting off the fire alarm and explaining it to your manager after everyone exited the building.
 
  • #19
Vanadium 50 said:
I once wrote code that deliberately executed the stack.
Sort of related to the famous buffer overrun problem. I worked at Microsoft, in the Windows Division, between '99 and '05. Toward the end of that time, Microsoft spent a lot of money sending every employee in the division (about 7500 of them) to a three-day training session to raise security awareness.
Following that training, the Visual Studio team added more secure replacements for insecure C standard library functions such as scanf(), each with a parameter indicating the maximum allowable length of the input data. The insecure versions were deprecated.
 
  • #20
Wow, that sounds a lot like a similar development in the NLS world, where we had to worry about multibyte characters and could no longer depend on the str* family of functions.

In some wide-character codesets, the null byte could appear mid-string, terminating byte-oriented string handling abruptly.
 
  • #21
I just ran into a language called "Rust", which is intended to be memory safe. I can't tell how well it meets these goals, but I think that applications-level programming (as opposed to systems-level programming) that enforces good practices is not a stupid idea.

Malformed requests for web pages should not grant access to system-level resources. They just shouldn't.
 
  • #22
Vanadium 50 said:
I once wrote code that deliberately executed the stack.
Ah, good times...
 
  • #23
Vanadium 50 said:
I just ran into a language called "Rust", which is intended to be memory safe. I can't tell how well it meets these goals, but think that applications-level programming (as opposed to systems-level programming) that enforces good practices is not a stupid idea.
Yes, I've been looking at Rust for a while although I'm interested in it for WebAssembly where the target is a safe VM anyway.

Vanadium 50 said:
Malformed requests for web pages should not grant access to system-level resources. They just shouldn't.
There should be no need to worry about this in your own code: any application code should be hidden from the web by a robust proxy (typically NGINX or LiteSpeed).
 
  • #24
I haven't written a line of Rust, so I don't know how good it is. Certainly the idea is good. I can see that some decisions need to be made in the implementation that have pros and cons. But I do think that if we want to get improved security we need to get away from the "one giant shared address space" model that C has. Pointers are not integers. :wink:

I am actually kind of surprised that virtualization hasn't penetrated further. If I go to Google, or AWS or whatever, there is no trouble setting up a web server that runs in a VM. So you'd think that would be the default, or at least a simple option on a physical computer running RHEL or other similar distribution. I'm sure it's possible. But I'd expect it to be simpler to set up - i.e. the default.
 
  • #25
Vanadium 50 said:
I am actually kind of surprised that virtualization hasn't penetrated further. If I go to Google, or AWS or whatever, there is no trouble setting up a web server that runs in a VM. So you'd think that would be the default, or at least a simple option on a physical computer running RHEL or other similar distribution. I'm sure it's possible. But I'd expect it to be simpler to set up - i.e. the default.
What would be the benefit? A cloud service's virtualization platform is there to isolate the VMs from each other; you can still segfault or kernel panic within a VM.

Of course if you want you can run all your desktop applications in separate VMWare/VirtualBox VMs (with 16 cores and 64GB RAM I tried this once for laughs. It wasn't that funny). And of course there is Docker.
 
  • #26
pbuk said:
What would be the benefit? A cloud service's virtualization platform is there to isolate each VM from each other,
Exactly. Today running multiple VMs is common, or at least not rare. It would not be perfect, but it would be better. With relatively low penalties.

pbuk said:
kernel panic
I've actually initiated a double panic once. I viewed it as a badge of pride.
 
  • #27
Kernel panics are nothing compared to bringing down a mainframe with hundreds of timesharing users and many multi-activity batch jobs that need to be restarted at the point of failure. This happened to me early in my programming career while using a well-known subroutine in superuser mode that had a coding issue and couldn't run in that mode.
 

FAQ: Recent Linux News and Intel Indirect Branch Tracking

What is the significance of Intel Indirect Branch Tracking in recent Linux news?

Intel Indirect Branch Tracking (IBT) is a hardware security feature, not a vulnerability. It is part of Intel's Control-flow Enforcement Technology (CET), available on Tiger Lake CPUs and newer. It has been in the news because the Linux kernel is set to enable IBT by default in its x86_64 configuration, with the change queued in TIP's x86/core branch for the Linux 6.2 merge window.

How does Intel Indirect Branch Tracking work?

IBT requires that every indirect branch (a call or jump through a register or memory location) land on a special ENDBR instruction at the target. If an attacker redirects an indirect call to an address that does not begin with ENDBR, the CPU raises a fault instead of executing the hijacked code.

What attacks does IBT protect against?

IBT defends against jump-oriented programming (JOP) and call-oriented programming (COP) attacks, in which an attacker who has corrupted a function pointer or other code pointer chains together existing pieces of code. It is one part of the broader Control Flow Integrity (CFI) strategy.

How does IBT relate to vulnerabilities like Spectre and Meltdown?

They are different things. Spectre and Meltdown are side-channel vulnerabilities in speculative execution, mitigated in the kernel by measures such as kernel page-table isolation (KPTI). IBT, by contrast, is a defensive feature against control-flow hijacking; neither replaces the other.

What steps should I take to benefit from IBT on my Linux system?

You need both a CPU that supports it (Intel Tiger Lake or newer) and a kernel built and booted with IBT enabled; once kernels enable it by default, running a recent kernel on supported hardware is enough. As always, keep your system and software up to date and be cautious about running untrusted code.
