Martin Harris
- TL;DR Summary
- Trying to optimize a plate condenser by reducing the analytical error through adjustments to the wall temperature and the condensing temperature.
Hi,
I am working on a plate heat exchanger (condenser) that uses water to condense refrigerant, as part of a heat pump.
I have a total heat load of 12.01 kW.
My current heat load is 10 kW.
I have an analytical error on the wall temperature of about 23%. If I use Excel's Solver to minimize that error by iterating on the wall temperature, the wall temperature decreases from 38.6 deg C to 37.81 deg C, the heat load decreases from 10 kW to 9.96 kW, and the temperature gradient therefore increases.
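For context, here is roughly the structure of the iteration I am doing in Excel, written out as a Python sketch. The heat-transfer coefficients and the water temperature below are placeholders rather than my actual design numbers; only the 44.3 deg C condensing temperature comes from my case. The point is just the residual that Solver is driving toward zero.

```python
# Sketch of the wall-temperature iteration (placeholder values, not my actual design data).
# Solver adjusts T_wall until the heat flux computed from the water side
# matches the flux computed from the condensing side.

def water_side_flux(T_wall, T_water=35.0, h_water=5000.0):
    """Heat flux from wall to water, W/m^2 (T_water and h_water are placeholders)."""
    return h_water * (T_wall - T_water)

def condensing_side_flux(T_wall, T_cond=44.3, h_cond=4000.0):
    """Heat flux from condensing refrigerant to wall, W/m^2 (h_cond is a placeholder)."""
    return h_cond * (T_cond - T_wall)

def analytical_error(T_wall):
    """Relative mismatch between the two flux estimates -- what Solver minimizes."""
    q_water = water_side_flux(T_wall)
    q_cond = condensing_side_flux(T_wall)
    return abs(q_water - q_cond) / q_cond

# Crude scan standing in for Excel Solver's iterations on the wall temperature.
best_T_wall = min((T / 100 for T in range(3500, 4400)), key=analytical_error)
print(f"Wall temperature with minimum error: {best_T_wall:.2f} deg C")
```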
If I instead minimize the analytical error of around 30% on the condensing temperature while the wall temperature is at its minimum (37.81 deg C), the condensing temperature increases from 44.3 deg C to 47.5 deg C, the wall temperature is no longer at its minimum (it rises from 37.81 deg C to 38.74 deg C), and the heat load increases to 12.0 kW, almost reaching the maximum heat load. It seems my heat flux q (W/m2) depends on both the wall temperature and the condensing temperature. This works against having a minimum wall temperature, since the wall temperature increases as the condensing temperature increases.
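To illustrate the coupling I mean, the sketch below treats both temperatures as unknowns and solves the two balances simultaneously. The coefficients, water temperature, and area are placeholders I made up for the example; only the 12.01 kW load and the rough starting temperatures come from my numbers, and this is not my actual model.

```python
from scipy.optimize import fsolve

# Placeholder values -- not my actual design data.
T_WATER = 35.0      # bulk water temperature, deg C (placeholder)
H_WATER = 5000.0    # water-side coefficient, W/m^2-K (placeholder)
H_COND = 4000.0     # condensing-side coefficient, W/m^2-K (placeholder)
AREA = 0.6          # heat-transfer area, m^2 (placeholder)
Q_TARGET = 12010.0  # total heat load, W (the 12.01 kW from my case)

def residuals(x):
    """Two coupled balances: water-side flux equals condensing-side flux,
    and the resulting duty matches the target heat load."""
    T_wall, T_cond = x
    q_water = H_WATER * (T_wall - T_WATER)   # W/m^2, wall to water
    q_cond = H_COND * (T_cond - T_wall)      # W/m^2, refrigerant to wall
    return [q_water - q_cond,                # flux continuity at the wall
            q_cond * AREA - Q_TARGET]        # duty matches the target load

T_wall, T_cond = fsolve(residuals, x0=[38.0, 44.0])
print(f"T_wall = {T_wall:.2f} deg C, T_cond = {T_cond:.2f} deg C")
```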
Any idea how to optimize this? What am I doing wrong? How should I proceed with the wall temperature and the condensing temperature? Does it make more sense to keep only a minimum wall temperature of 37.81 deg C and leave the condensing temperature at its default of 44.3 deg C, or does it make more sense to increase both?
I'm trying to minimize the analytical error. I can do it for the wall temperature, which yields a minimum wall temperature of 37.81 deg C, or I can do it for the condensing temperature, which gives a condensing temperature of 47.5 deg C and raises the wall temperature to 38.74 deg C. Which of the two methods makes more sense: optimizing for a minimum wall temperature, or optimizing for the condensing temperature?