Re: [PATCH v2 2/5] parse-options: introduce precision handling for `OPTION_INTEGER`

Patrick Steinhardt <ps@xxxxxx> writes:

> diff --git a/parse-options.h b/parse-options.h
> index 997ffbee805..8d5f9c95f9c 100644
> --- a/parse-options.h
> +++ b/parse-options.h
> @@ -92,6 +92,10 @@ typedef int parse_opt_subcommand_fn(int argc, const char **argv,
>   * `value`::
>   *   stores pointers to the values to be filled.
>   *
> + * `precision`::
> + *   precision of the integer pointed to by `value`. Should typically be its
> + *   `sizeof()`.

Is the fact that the integer can store up to 16 bits vs. 32 bits really
"precision"?  "My --size option runs up to 200,000; what value should I
set it to?" is a natural question readers of this sentence would have in
their minds, as if the field were called "range" or something (which
might not be a bad thing to have, but that is totally outside the theme
of this topic).

In any case, include a phrase "number of bytes" somewhere in the
description to make it clear what unit we are counting.

Are there common use cases already in the codebase where this number is
*not* its sizeof()?




