[Gambas-user] How to determine the order of magnitude of a Long variable

Bruce adamnt42 at gmail.com
Thu May 13 17:31:35 CEST 2021


On 14/5/21 12:36 am, Tobias Boege wrote:
> On Thu, 13 May 2021, Bruce wrote:
>> I need to determine how many blocks (sectors, writeable chunks, whatever)
>> are available on a USB "drive" (stick, SSD, whatever).
>>
>> I can get a reasonably accurate value by Shell'ing the lsblk command.
>> However, the value could be anywhere between 0 and the total number of
>> usable blocks on the device (i.e. the "formatted size").
>>
>> In order to determine whether the data I am about to write to the device
>> will fit, I need to know the order of magnitude of these free blocks - KB,
>> MB, GB, TB, etc.
>>
>> This is because the utilities that do the writing won't give the size of
>> the data in anything other than the so-called "human-readable" format, e.g.
>> 3.25G or 4GB or any number of other variants, which may, by the way, be
>> based on either 1024- or 1000-byte scales. (Grrr!)
>>
>> In the first case, that 3.25G could be any number of bytes between (3.0
>> gigabytes)+1 and (3.5 gigabytes)-1, and that might not fit onto the drive
>> if it tries to write into the "last block on the device".
>>
>> My goal, in case you are wondering, is to keep as many daily backups as
>> possible on the drive, and the Gambas project will be run by a cron job (so
>> there is no user interaction).
>>
>> To be safe, I thought I'd take the biggest possible "about to write" size,
>> i.e. the smallest scale (I think), convert that to a "Required Blocks"
>> value and compare it to the actual AvailableBlocks. This does not work, for
>> various mathematical reasons. So I need to do it the other way around, i.e.
>> ask whether the number of AvailableBlocks is absolutely larger than the
>> number of needed blocks.
>>
>> I have searched the internet for something like this, with poor results.
>> So, in desperation... does anyone have any idea how to get a reliable
>> comparison?
>>
> 
> I would first like to ask whether you can use `df` instead of the tools
> you mentioned. Depending on how your backups work, you may or may not have
> to mount the drive before starting the backup. If the filesystem is
> always mounted on /mnt/backup, you can use
> 
>    $ LC_ALL=C df --output=avail /mnt/backup
>       Avail
>    1234567890
> 
> This number is in bytes and the path to the mountpoint for your drive
> will be available to your script anyway.
> 
> Also, `man lsblk` tells me that the tool understands a --bytes switch
> to print bytes instead of the human-readable string with the unclear
> units you are having trouble with.
> 
> Best,
> Tobias
> 

Hiya Tobi,

The reason for using "lsblk" is that I can get the number of free 
blocks from it. AFAIK df only gives the free "size", which is slightly 
different from blocks.
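
Taking up your --bytes hint, something like this is roughly what I have 
in mind for reading the free space into a Long (bytes are easy to turn 
back into blocks once the block size is known). An untested sketch: the 
FSAVAIL column needs a reasonably recent util-linux, it is only filled 
in while the filesystem is mounted, and /dev/sdb1 is just a placeholder 
for the real device:

   ' Read the free space on the filesystem, in bytes, via lsblk.
   ' -b = sizes in bytes, -n = suppress the heading line.
   Private Function GetFreeBytes(sDevice As String) As Long

     Dim sOutput As String

     Shell "lsblk -b -n -o FSAVAIL " & sDevice To sOutput
     Return CLong(Trim(sOutput))

   End

(A Gambas Long is 64 bits, so even a multi-terabyte drive won't 
overflow it.)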

The reason I need to work in blocks (I think) is that the whole nightly 
backup "series" creates 6 databases and a bunch of other data (text 
files and images). I am trying to maximise the number of "cycles" on 
the backup disk, where a "cycle" is one day's "series".
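
For the "3.25G" style strings from my first mail, the safest approach I 
can think of is to parse them pessimistically: always read the suffix 
as 1024-based and always round up, so the estimate can only ever be too 
big, never too small. A rough sketch (untested, and it ignores 
two-letter suffixes like "GB"):

   ' Convert a human-readable size such as "3.25G" into a pessimistic
   ' (1024-based, rounded-up) byte count.
   Private Function HumanToBytes(sSize As String) As Long

     Dim iPower As Integer

     Select Case Right$(UCase$(sSize), 1)
       Case "K"
         iPower = 1
       Case "M"
         iPower = 2
       Case "G"
         iPower = 3
       Case "T"
         iPower = 4
       Default
         ' No suffix: the value is already in bytes.
         Return CLong(sSize)
     End Select

     ' Left$(sSize, -1) drops the suffix; Ceil() rounds up.
     Return Ceil(CFloat(Left$(sSize, -1)) * 1024 ^ iPower)

   End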

At the moment I am just guessing that I need about 15% free space on the 
backup drive before attempting the backup. But this is terribly 
inaccurate, resulting on odd occasions in the backup series failing due 
to lack of space, or in more of the older cycles being deleted than 
necessary. Neither outcome is optimal.
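
With a pessimistic byte count, the test itself becomes a plain integer 
comparison done the "safe" way round, something like this (again a 
sketch: the 512 would really come from lsblk's LOG-SEC column, and the 
device path is made up):

   ' Hypothetical driver: check the fit before writing one cycle.
   Public Sub CheckAndBackup()

     Dim lBlockSize As Long = 512
     Dim lNeeded As Long

     ' Ceiling division: round the required block count up, never down.
     lNeeded = (HumanToBytes("3.25G") + lBlockSize - 1) \ lBlockSize

     ' Only write the cycle if it fits with certainty.
     If lNeeded <= GetFreeBytes("/dev/sdb1") \ lBlockSize Then
       ' ... run the backup series ...
     Endif

   End

If something like that holds up, the 15% guess could go away entirely.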

I know I'm being finicky, but after 3 HDD failures I am trying to keep 
as much backup material as possible. (The backup USB drive is a 100GB 
Kingston.)

regards
bruce

