Comments
I also have a problem with that field 'quantity rounding precision' on the item UOM page.
example 1:
If base UOM = PCS, the other UOM = KG, and there are 6 pcs in a KG, then this is the setup:
CODE    QTY PER UOM
----------------------------------
PCS     1
KG      6
When I set the 'quantity rounding precision' to 1 and I purchase 3 KG, which is 0.5 pcs (OK, I know this is a bad example), then I will receive 1 pcs because of the rounding precision of 1.
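The behavior described above can be sketched in plain Python. This is only an illustration of round-half-up to a given precision, not Business Central's actual code, and `round_to_precision` is a hypothetical helper:

```python
import math

def round_to_precision(qty, precision):
    """Round qty to the nearest multiple of precision (halves round up)."""
    return math.floor(qty / precision + 0.5) * precision

# A base-UOM quantity of 0.5 pcs with rounding precision 1
print(round_to_precision(0.5, 1))  # -> 1, so 1 pcs is received
```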
But it doesn't work the other way around.
example 2:
If base UOM = PCS, the other UOM = KG, and there are 6 KG in a piece, then this is the setup:
CODE    QTY PER UOM
----------------------------------
PCS     1
KG      0.16667
When I set the 'quantity rounding precision' to 1 and I purchase 96 KG, which is 16.00032 pcs (OK, I know this is a bad example), then I want to receive 16 pcs because of the rounding precision of 1.
Or if I purchase 97 KG, which is 16.16699 pcs, then I also want to receive 16 pcs because of the rounding precision of 1.
But this is not possible. The 'quantity rounding precision' field only allows me to enter a number with up to 5 decimals, i.e. 0.00001, because 0.16667 also has 5 decimals. I would expect the quantity rounding precision to round only into the base UOM, because that is the UOM you keep inventory in, but that does not seem to be the case.
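The behavior asked for in example 2, converting the purchased KG quantity into the base UOM and only then rounding with precision 1, can be sketched like this (illustrative Python assuming round-half-up, not the product's implementation):

```python
import math

QTY_PER_UOM_KG = 0.16667  # 1 KG = 0.16667 pcs (base UOM)

def round_to_precision(qty, precision):
    """Round qty to the nearest multiple of precision (halves round up)."""
    return math.floor(qty / precision + 0.5) * precision

for kg in (96, 97):
    base_qty = kg * QTY_PER_UOM_KG           # 16.00032 and 16.16699 pcs
    rounded = round_to_precision(base_qty, 1)
    print(f"{kg} KG -> {base_qty:.5f} pcs -> {rounded} pcs")  # both round to 16
```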
This would be very useful. We have seen this requirement on two Pharma projects. Pharmaceutical companies often have a legal obligation to hold X months of future demand forecast as safety stock in each month. We ended up customizing a job that creates min/max keys based on the demand forecast, but it would be great if a standard solution could do this in the future.
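As a purely hypothetical illustration of the min/max idea (the function name, the coverage value, the forecast numbers, and the interpretation of "cover X months" are all made up; the actual customized job is not shown here), a monthly minimum could be derived as the sum of the next X months of forecast:

```python
COVERAGE_MONTHS = 3  # "X" months of forward coverage (assumed value)

def min_keys(forecast, coverage=COVERAGE_MONTHS):
    """For each month, set the minimum to the total demand forecast of the
    following `coverage` months (one possible reading of the requirement).
    Months near the end of the horizon have less forecast to cover."""
    return [sum(forecast[i + 1:i + 1 + coverage]) for i in range(len(forecast))]

monthly_forecast = [100, 120, 90, 110, 130, 80]
print(min_keys(monthly_forecast))  # -> [320, 330, 320, 210, 80, 0]
```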
Agreed, this is a problem; it is possible to view the error description within the UI, but that is an impractical way to resolve errors for bulk data loads.
We should be able to download failed rows with information about why each row failed in order to perform fixes in bulk. This is especially important in large loads when records may fail for more than one reason.
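One way to sketch that idea (hypothetical Python with made-up field names and validation rules, not any existing import tool): validate each row fully, collect every failure reason, and keep the failed rows together with their reasons so they can be exported and fixed in bulk:

```python
def validate_row(row):
    """Return ALL reasons a row fails, not just the first one."""
    errors = []
    if not row.get("ItemNo"):
        errors.append("ItemNo is missing")
    if not str(row.get("Qty", "")).isdigit():
        errors.append("Qty is not a non-negative whole number")
    return errors

def bulk_load(rows):
    loaded, failed = [], []
    for row in rows:
        errors = validate_row(row)
        if errors:
            # Keep the original row plus every reason it failed
            failed.append({**row, "Errors": "; ".join(errors)})
        else:
            loaded.append(row)
    return loaded, failed
```

The `failed` list could then be written back out (e.g. as a CSV download) so that a single file shows every reason each row was rejected.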
It is great that you are using the profiler for investigating performance issues. We are aware of current gaps, such as deeper insight into actual SQL calls and outbound HTTP call performance, but can you expand a bit on the scenarios where the profiler does not provide enough information and the old client monitor/code coverage tools did?
For code coverage, are you aware that the snapshot debugger replay (i.e., the one used to capture a profile from VSCode) can show a quick overview of lines of code actually hit? This is somewhat similar to code coverage, although simpler. See
https://learn.microsoft.com/en-us/dynamics365-release-plan/2022wave2/smb/dynamics365-business-central/visualize-code-lines-executed-snapshot-capture
Thanks,
Peter (Microsoft)