I am struggling with the plausibility of this question.
First, with the fact that you seem to think knowledge of C++ rather than C is any sort of barrier - if you don't know how to do it in C, then you don't know how to do it in C++ either, so you cannot really blame the language.
Second, with the fact that this is a real work requirement rather than an academic assignment: it is far too trivial for that, and in a work environment you would normally have more experienced colleagues to turn to rather than a forum.
Third, with your assertion that this is a problem of "level detection" when it is in fact one of "edge detection": you need to know when the signal changes from 1 to 0.
Anyhow, the answer to your question will depend entirely on the target hardware and platform you are coding for. It may also depend on the frequency range of the signal, the mark-space ratio, and the accuracy required. A generic C/standard-library solution might be possible, but it may only be practical for suitably slow signals with a low precision requirement, depending on the target and on what other tasks must be carried out while timing.
Let us say for example that the target is a microcontroller. In that case the best solution would be to connect the signal to an on-chip timer/counter gate such that the signal latches the count, directly measuring the length of the last cycle with zero software overhead. Most timer/counter peripherals will generate an interrupt when the count is latched, so your code can get on with other work independently.
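The arithmetic on the latched values is the same on any part with a free-running capture timer; as a rough illustration, assuming a 16-bit counter and a hypothetical `period_ticks()` helper (in real code this body would sit in the capture interrupt handler):

```c
#include <stdint.h>

/* Hypothetical input-capture handling: the capture register latches the
 * free-running 16-bit timer count on each falling edge.  Unsigned
 * subtraction gives the correct period even when the counter wraps
 * between two captures, provided it wraps no more than once. */
static uint16_t last_capture;

uint16_t period_ticks(uint16_t capture)
{
    uint16_t period = (uint16_t)(capture - last_capture);  /* wrap-safe */
    last_capture = capture;
    return period;
}
```

Keeping the captured values as raw `uint16_t` and converting to real time units only at the end is what makes the single-wrap case come out right for free.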
If it is not possible to connect the signal to a timer/counter, then the second-best option would be to connect the signal to an external interrupt pin, and set it to generate an interrupt on a falling edge of the signal. The interrupt handler can then read some internal clock source (an on-chip timer) to determine the period since the previous edge. This still has a low software overhead, but it is likely to be less accurate and to have greater jitter if the system has to handle other, higher-priority interrupts.
The worst possible solution is to continuously poll the input, testing for a transition from 1 to 0, log the time of each such occurrence, and subtract the previously logged time from it. This might be very accurate given a suitable clock source and nothing else for the system to do, but it will be less deterministic and/or precise if the system has to do other work while monitoring the signal.
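The polling loop itself is simple; in this sketch `read_input()` and `clock_ticks()` are assumed stand-ins for a GPIO read and a free-running clock on the real target, and the simulated square wave at the bottom exists only so the logic can be exercised off-target:

```c
#include <stdbool.h>
#include <stdint.h>

/* Detect a 1 -> 0 transition by comparing each sample with the previous
 * one, and return the tick count between two consecutive falling edges,
 * i.e. the length of one full cycle. */
uint32_t measure_period(bool (*read_input)(void), uint32_t (*clock_ticks)(void))
{
    bool prev = read_input();
    uint32_t t_first = 0;
    int edges = 0;

    for (;;) {
        bool cur = read_input();
        if (prev && !cur) {                 /* falling edge: 1 -> 0 */
            uint32_t t = clock_ticks();
            if (edges++ == 0)
                t_first = t;                /* first edge: start timing */
            else
                return t - t_first;         /* second edge: one period */
        }
        prev = cur;
    }
}

/* Simulated 20-tick square wave for demonstration: each input read
 * advances "time" by one tick, high for 10 ticks then low for 10. */
static uint32_t sim_time = 0;
static bool sim_input(void)     { sim_time++; return (sim_time / 10) % 2 == 0; }
static uint32_t sim_clock(void) { return sim_time; }
```

The resolution of this approach is bounded by the loop iteration time, not the clock: any work added inside the loop directly widens the window within which an edge can go unnoticed.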
On the other hand, if a solution is required on a general-purpose computer running Windows or Linux, for example, then since these are not real-time operating systems, precision and resolution will be limited. That may not matter for slow signals and low precision requirements - so you need to specify those if you want a sensible answer.
- What is the signal frequency range?
- What is the signal mark-space ratio (or range)?
- What is the target hardware platform (chip, PC etc.)?
- How is the signal connected to the hardware (GPIO, Interrupt, Timer Input Capture, external I/O device etc.)?
- What OS is running on the system (bare-metal, RTOS, GPOS etc.)?
The answers to those questions are germane to the most appropriate solution. It may also be helpful to know what compiler/tool-chain you are using.
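If the target does turn out to be a POSIX system such as Linux, the portable part of the job is just timestamping each detected falling edge with `clock_gettime(CLOCK_MONOTONIC, ...)` and subtracting. This helper (the name is mine) does the subtraction in nanoseconds; how the edge itself is detected - `poll()` on a GPIO character device, a blocking read, etc. - is platform-specific:

```c
#include <stdint.h>
#include <time.h>

/* Difference between two monotonic timestamps, in nanoseconds.
 * tv_nsec is always in [0, 1e9), so the borrow is handled implicitly
 * by allowing the nanosecond term to go negative. */
int64_t timespec_diff_ns(struct timespec later, struct timespec earlier)
{
    return (int64_t)(later.tv_sec - earlier.tv_sec) * 1000000000LL
         + (later.tv_nsec - earlier.tv_nsec);
}
```

Even with nanosecond-resolution arithmetic, scheduling latency on a non-real-time OS typically limits the usable accuracy to somewhere in the milliseconds, which is why the signal frequency range matters so much.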