I am assuming log2-transformation here.
Apply the reverse transformation to the data (i.e. 2^data_point) and see what kind of numbers you get back. If many of the resulting numbers are integers, or end in long runs of .999999 (i.e. integers that only look fractional because of rounding), your data was most likely log2-transformed.
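A minimal sketch of this check in Python, assuming your values are in a plain array (the function name and the tolerance are my own choices, not anything from the paper):

```python
import numpy as np

def fraction_near_integer(values, base=2.0, tol=1e-6):
    """Back-transform the values (base**x) and report the fraction that land
    on (near-)integers; a high fraction hints the data were log-transformed."""
    back = np.power(base, np.asarray(values, dtype=float))
    dist = np.abs(back - np.round(back))  # distance to the nearest integer
    return np.mean(dist < tol)

# Hypothetical example: log2 of integer counts gives a fraction close to 1.0
x = np.log2([12, 250, 1024, 7, 33])
print(fraction_near_integer(x))
```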
If the original data were not integers, the trick above won't help. Still, you can compare the distribution of the data as you have it with the distribution of the back-transformed (2^x) values. If the former is closer to normality, that likely means the data are already transformed; conversely, if the latter has the better normal distribution, the data are likely not transformed.
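One way to make that comparison concrete, sketched under the assumption that the values fit in memory and sit in a log-like range (exponentiating very large raw intensities would overflow); the Shapiro-Wilk statistic is just one possible normality measure:

```python
import numpy as np
from scipy import stats

def more_normal_version(values, base=2.0):
    """Compare normality of the data as provided vs. its back-transformed
    version; the Shapiro-Wilk W statistic is closer to 1 for more normal data."""
    x = np.asarray(values, dtype=float)
    w_as_is, _ = stats.shapiro(x)
    w_back, _ = stats.shapiro(np.power(base, x))
    if w_as_is > w_back:
        return "data as provided look more normal -> possibly already log2-transformed"
    return "back-transformed data look more normal -> possibly not transformed"
```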
If the paper contains plots based on the data, you can compare their axis scales with the scale of your data and see whether they match.
None of these approaches will give you a definitive answer, but you may get a clue until you hear back from the original authors.
The standard assumption is that the submitted data are in the original (non-transformed) format.
Another suggestion: look at the statistics reported in the paper. If it gives a mean or standard deviation, compare it against the raw data; sometimes just by eyeballing it you can tell whether the data are original or log-transformed.
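A rough aid for that eyeballing, assuming you have the reported mean/SD at hand (the argument names are placeholders for whatever the paper actually reports):

```python
import numpy as np

def compare_to_reported(values, reported_mean, reported_sd):
    """Print mean/SD of the data as-is and of its log2 version next to the
    values reported in the paper, to see which scale they resemble."""
    x = np.asarray(values, dtype=float)
    candidates = {
        "as provided": x,
        "log2 of provided": np.log2(x[x > 0]),  # log2 only defined for positive values
    }
    for label, v in candidates.items():
        print(f"{label}: mean={v.mean():.3g}, sd={v.std(ddof=1):.3g} "
              f"(reported: mean={reported_mean}, sd={reported_sd})")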
If you post a link to the article you may get better answers... Following Istvan's comment, if the data have values above, say, 100, there's a good chance they are raw, since 100 in log2 space is about 1.3e30 on the linear scale (although I don't know what the typical range of UPLC is...).
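That comment boils down to a one-line heuristic; a sketch with an arbitrary threshold of 100, which is my own guess rather than anything from the paper:

```python
import numpy as np

def looks_like_log2(values, threshold=100.0):
    """Log2-transformed intensities rarely exceed ~100 (2**100 is ~1.3e30),
    so a maximum far above the threshold suggests the data are still raw."""
    return float(np.max(values)) <= threshold

print(looks_like_log2([15.2, 18.7, 22.1]))  # True  -> plausibly log2-transformed
print(looks_like_log2([3.1e6, 4.5e5]))      # False -> plausibly raw
```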