It relates to (intuitive) precision.
See this link for an extended discussion -
General rule of thumb: it is not uncommon, when making a single measurement, to avoid all this painstaking process and simply assign one half of the smallest scale division as the uncertainty. For instance, the smallest scale division of common pan balances is 0.1 g, so in that case we would assume an uncertainty of +/- 0.05 g.
Accordingly, a (neutral / slightly pessimistic?) option could be to assert that a stated measurement of X units is interpreted to mean X +/- 0.25 units, e.g. a stated 1.25 is taken as 1.25 +/- 0.25.
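As a quick sketch (Python purely for illustration; the function name is my own), the half-smallest-division rule of thumb above amounts to:

```python
def half_division_uncertainty(smallest_division):
    """Rule-of-thumb uncertainty: half the smallest scale division."""
    return smallest_division / 2

# Pan balance with 0.1 g divisions -> +/- 0.05 g
print(half_division_uncertainty(0.1))   # 0.05
# Values reported to the nearest 0.5 unit -> +/- 0.25
print(half_division_uncertainty(0.5))   # 0.25
```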
Of course, whether such a measurement is in fact precise enough to be useful is a different question.
PS - some terminology in the above link is perhaps used rather "loosely"; e.g. precision and accuracy are, statistically speaking, two rather different things.