I think a lot of the confusion about units could be resolved by adopting two fairly simple but uncommon conventions:
1) all mathematical expressions should express truths about pure numbers (i.e., dimensionless quantities), and
2) all constants should be replaced by conventional values of the observables, along with whatever dimensionless numbers are required to make the conventions self-consistent.
When we do this, we would replace, for example, Newton's force of gravity, normally written F = GMm/d^2, with F/F* = (M/M*)(m/m*)/(d/d*)^2. All the subscripts * mean "the conventional value" for that observable, and note that the conventions must be self-consistent in the sense that if all the quantities take on their conventional values, the equation must hold.
Simple grouping of the terms shows what the value of G is in terms of the conventional quantities: F = [F* d*^2/(M* m*)] Mm/d^2, so G = F* d*^2/(M* m*). That's all G ever was-- the value you get from a collection of self-consistent conventional choices. So although the form I suggest looks more complicated (and that's why it isn't used), it has conceptual advantages-- we pay a conceptual price for using "G". The form I suggest expresses two independent types of information-- it shows the functional dependences that characterize the law, and it explicitly indicates that a self-consistent convention has been adopted. The usual form achieves the former goal but compromises the latter, and obscures the role of convention in the whole concept of what a unit is.
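To make that concrete, here is a minimal Python sketch. The conventional values M*, m*, and d* below are made-up illustrative choices (roughly an Earth mass and an Earth radius), not any standard; the point is only that once F* is fixed by self-consistency, the starred form and the usual G form give identical answers.

```python
# Minimal sketch: the starred (dimensionless) form of Newton's gravity
# reproduces the usual F = G*M*m/d^2 whenever the conventional values
# satisfy F_star = G * M_star * m_star / d_star**2.

G = 6.674e-11      # m^3 kg^-1 s^-2 (rounded CODATA value)

# Made-up illustrative conventions (not any standard):
M_star = 5.97e24   # kg, roughly an Earth mass
m_star = 1.0       # kg
d_star = 6.37e6    # m, roughly an Earth radius

# Self-consistency: the equation must hold when every quantity takes
# its conventional value, and that fixes F_star:
F_star = G * M_star * m_star / d_star**2   # ~9.8 N

def force_starred(M, m, d):
    """F via the convention-based form F/F* = (M/M*)(m/m*)/(d/d*)^2."""
    return F_star * (M / M_star) * (m / m_star) / (d / d_star)**2

def force_usual(M, m, d):
    """F via the usual form with the dimensional constant G."""
    return G * M * m / d**2

M, m, d = 5.97e24, 70.0, 6.37e6     # a 70 kg person at Earth's surface
print(force_starred(M, m, d))       # ~687 N
print(force_usual(M, m, d))         # ~687 N (identical up to rounding)
```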
Note also that we can recover the simple form, indeed a simpler form, by adopting the implicit convention that each variable actually means its ratio to the conventional choice, so F stands for F/F* with the value F* left implicit, and the force of gravity becomes simply F = Mm/d^2. This is the "business end" of the expression; the "G" is just a confuser that adds tedium to doing physics problems. This form of the equation is, I believe, the reason Fourier said that the physics is independent of the units-- all we need are observations that tell us what a consistent convention is, and then we never need units in the equations of physics.
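Continuing the sketch above (same made-up conventions), the implicit-convention version is pure arithmetic on ratios:

```python
# With the conventions implicit, every variable is a pure number (its
# ratio to the conventional value), and gravity is just F = M*m/d^2
# with no G anywhere.
M = 1.0    # the Earth, in units of M* (~ an Earth mass)
m = 70.0   # a 70 kg person, in units of m* (= 1 kg)
d = 1.0    # the Earth's surface, in units of d* (~ an Earth radius)

F = M * m / d**2   # dimensionless force: 70.0
# Multiply by F* (~9.8 N) only if newtons are wanted: ~687 N, as before.
```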
So what happened to the "G" in this form of the equation? Apparently, we don't need G if we reference all quantities to a convention that is consistent with the equation. So the entire reason for the presence of "G" is that we don't usually do that-- we choose our conventions for force, mass, and distance in an arbitrary way that is not consistent with the force of gravity. There's a reason for that-- the kinds of masses and distances we generally deal with yield negligible gravitational forces, so a self-consistent force convention would correspond to a very tiny force, and our actual forces would seem huge by comparison. But those are contexts where we don't calculate the force of gravity in the first place, we just use mg, saving us from having to measure the mass of interest and the mass of the Earth in the same units. We can still do that-- just use m/m* and a/g for masses and accelerations, and F = ma becomes F/F* = (m/m*)(a/g), where F* = m*g is the self-consistent convention. When that convention is implicit, we again have F = ma, but now the quantities are dimensionless-- they are ratios to the self-consistent convention that connect a conventional reference observation to any other observation.
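The same move in code (again a sketch; m* = 1 kg is just an illustrative choice):

```python
# Same trick for F = ma: take g as the conventional acceleration and
# some m* as the conventional mass; self-consistency then forces
# F* = m* g, and F/F* = (m/m*)(a/g) holds between pure numbers.
g = 9.81              # m/s^2
m_star = 1.0          # kg, an illustrative choice
F_star = m_star * g   # ~9.81 N, fixed by self-consistency

m = 70.0 / m_star     # a 70 kg mass as a pure number
a = 9.81 / g          # free-fall acceleration as a pure number: 1.0

F = m * a             # F = ma between pure numbers: 70.0
print(F * F_star)     # ~687 N in conventional units, matching the above
```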
We don't usually do this because our everyday values are generally not self-consistent with the equations we want to use them in. Then we need constants that carry dimensions in our equations-- but that is a high price to pay, because there is an actual lesson in the smallness of the gravitational force, and we completely miss that lesson when we select inconsistent unit conventions and have to include constants of conversion in our equations. I think the conceptual price we paid to get everyday kinds of numbers is too high-- we made the wrong choices for our unit conventions, and we obscure some of the more important lessons of physics by doing so.
Now, it should be mentioned that there will not be one single set of conventional values consistent with all the equations of physics-- we still have to choose which equations set the consistency of the conventions, and then other equations will have to include dimensionless constants (like the fine structure constant) to allow that consistency to continue to hold. But there is a lesson in these dimensionless constants-- they are pure numbers, so in a sense they are "numbers that nature knows", and their values are meaningful independent of our conventions. Again, by choosing inconsistent conventions, we miss this lesson-- the lesson of the dimensionless constants that nature actually exhibits. They get lost among all the G's, k's, epsilons, and so on.
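The fine structure constant is the classic case: computing it from the published SI values (standard CODATA figures), every unit cancels and the same pure number comes out, regardless of the convention.

```python
import math

# The fine structure constant is a pure number: compute it from the SI
# values of e, epsilon_0, hbar, and c, and every unit cancels, leaving
# the number any self-consistent convention would give.
e    = 1.602176634e-19    # C (exact in the 2019 SI)
eps0 = 8.8541878128e-12   # F/m (CODATA 2018)
hbar = 1.054571817e-34    # J s
c    = 2.99792458e8       # m/s (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)       # ~0.0072973525693
print(1 / alpha)   # ~137.036, the famous pure number
```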
An alternative is using "rational" units, which many theoretical physicists, who don't want to miss these lessons, do all the time. But such units are not viewed as practical for everyday usage, as they don't translate well for people who want numbers they can picture from experience, like square meters and kilograms and seconds. It's a compromise made to the engineers, in effect, but it obscures the meaning of the physics, and I think it was a mistake. It's basically the mentality that you "take the theory to the observations", meaning it is the theorist's job to package everything in the language of the observer so the observer can test it without understanding what it is really saying. I think that's wrong-- I think the purpose of the theory is to understand the observations, so the observations must be converted into the language of the theory as a key step in understanding them. The observations are the reality, yet we must process them to understand their lessons; they are not just means of testing theories that need to be dumbed down into everyday numbers.
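To give a flavor of that style, here is one more sketch of the same convention trick in the spirit of such units (m* = 1 kg is my illustrative choice, not any standard one): E = mc^2 sheds its constant exactly the way Newton's law shed its G.

```python
# A taste of theorists' units: adopt m* = 1 kg and let self-consistency
# fix E* = m* c^2; then E = m c^2 collapses to E = m between pure numbers.
c = 2.99792458e8        # m/s, needed only to translate back to SI
m_star = 1.0            # kg, illustrative
E_star = m_star * c**2  # joules, fixed by self-consistency

m = 9.1093837015e-31 / m_star   # the electron mass as a pure number
E = m                           # E = m replaces E = m c^2
print(E * E_star)               # ~8.187e-14 J, the familiar rest energy
print(9.1093837015e-31 * c**2)  # same number from the usual formula
```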