I've been asked to make a fuel (gasoline) level sensor and I've been experimenting with the FDC1004 EVM and a homemade TIDA-00317 sensor.
For my first test, I taped the sensor to a thin plastic container and filled it with water. It worked very well for that, ranging from about 3 pF when empty to nearly 12 pF when full.
For the next test, I taped the sensor to a standard red gas can, about half full of gas. I figured I could tip the gas can back and forth to simulate changing fuel levels - though obviously the thickness of fuel in front of the sensor would be uneven as a result of tipping it compared with actually filling and emptying it.
I can see a difference between full and empty, but not much: about a 0.85 pF change from full to empty with the channel gains set to 1, or about 2 pF with the gains set to 3. The sensitivity is far lower than it was in the water test, and it doesn't look like there's enough range to be useful.
I see that the dielectric constant of water is about 80, while gasoline's is only about 2. Is this the primary reason for the difference in performance? I know the plastic of this container is thicker than what I used for the water test, and that my tipping-the-container method is not ideal, but it seems like both of those effects should be relatively minor.
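As a sanity check on my own numbers, here's a rough back-of-envelope comparison. The model is my assumption, not from any TI document: I'm guessing that the capacitance change when liquid replaces air in front of the electrodes scales very roughly with (εr − 1).

```python
# Rough sanity check: does the dielectric constant alone explain the
# sensitivity gap between my water and gasoline tests?
#
# Assumption (mine, not from a datasheet): the fringing-field capacitance
# change when liquid displaces air scales roughly with (er - 1).

ER_WATER = 80.0      # relative permittivity of water (approx.)
ER_GASOLINE = 2.0    # relative permittivity of gasoline (approx.)
ER_AIR = 1.0

predicted_ratio = (ER_WATER - ER_AIR) / (ER_GASOLINE - ER_AIR)

# Measured full-to-empty spans from my two tests (gain = 1 in both):
span_water_pF = 12.0 - 3.0   # ~9 pF with water
span_gas_pF = 0.85           # ~0.85 pF with gasoline
measured_ratio = span_water_pF / span_gas_pF

print(f"predicted water/gas sensitivity ratio: ~{predicted_ratio:.0f}x")
print(f"measured  water/gas sensitivity ratio: ~{measured_ratio:.1f}x")
```

The naive scaling predicts roughly 79×, but I only measured about an 11× gap, so either this simple model is wrong or something else (the thicker wall, or the water response flattening out at high εr) is also in play. Either way, the dielectric constant looks like the dominant factor.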
Do you have any advice for increasing the sensitivity to make an acceptable sensor for gas levels? We can make a new sensor if needed. It looks like making it wider will increase the sensitivity, but I don't know how much wider it needs to be to perform well.
Is this out-of-phase (OOP) technique and this sensor the right solution for fuel level sensing, or is there something better? An in-tank design would be acceptable if capacitive sensing from outside the tank just isn't the right fit for fuel.
Thanks,
Glen