The more granular the quantization, the more accurate it will be. However, it also requires more memory, since we need to store more quantization parameters. There are different granularities when it comes to quantization. We have per-tensor quantization, but we don't have to use the same scale and zero point for the whole tensor. We can, for instance, calculate a scale and a zero point for each axis; this is called per-channel quantization. We could also choose a group of n elements and quantize each group with its own scale and zero point; this is called per-group quantization.

Per-tensor quantization is what we did in the previous class, so let's refresh our memory with a simple example. We will use the test tensor from the previous lab, and this time we will apply symmetric quantization to it. We will use the linear_q_symmetric function that we just coded: we pass it the test tensor, and it returns the quantized tensor and the scale. To build the summary, we also need to dequantize the result, so we will use the linear dequantization function we coded in the last lab, passing the quantized tensor, the scale, and the zero point. As you remember, the zero point is equal to zero for symmetric quantization. Now we have everything to plot the summary. As you can see, the quantization worked pretty well: the dequantized values are pretty close to the original ones, and the quantization error tensor looks pretty good. Let's have a look at the quantization error: we get about 2.5. If you remember, in the previous lab, when we used asymmetric quantization, the quantization error was around 1.5.
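For reference, here is a minimal sketch of this walkthrough in PyTorch. The helper names (linear_q_symmetric, linear_dequantization) follow the functions mentioned above, but their bodies and the test_tensor values are reconstructions for illustration, not the exact lab code, so the printed error will depend on the tensor you use.

```python
import torch

def linear_q_symmetric(tensor, dtype=torch.int8):
    # Symmetric, per-tensor quantization: one scale for the whole tensor,
    # zero point fixed at 0. The scale maps the largest absolute value in
    # the tensor onto the largest value of the integer range.
    r_max = tensor.abs().max().item()
    q_max = torch.iinfo(dtype).max
    scale = r_max / q_max
    quantized = torch.clamp(torch.round(tensor / scale),
                            torch.iinfo(dtype).min, q_max).to(dtype)
    return quantized, scale

def linear_dequantization(quantized_tensor, scale, zero_point):
    # Reverse the linear mapping: r = scale * (q - zero_point)
    return scale * (quantized_tensor.float() - zero_point)

# Example test tensor (assumed values, standing in for the lab's tensor)
test_tensor = torch.tensor([[191.6,  -13.5, 728.6],
                            [ 92.14, 295.5, -184.0],
                            [  0.0,  684.6, 245.5]])

quantized_tensor, scale = linear_q_symmetric(test_tensor)
# Zero point is 0 for symmetric quantization
dequantized_tensor = linear_dequantization(quantized_tensor, scale, 0)

# Mean squared quantization error between original and dequantized values
quantization_error = (dequantized_tensor - test_tensor).square().mean()
print(quantization_error)
```

As a point of contrast with the granularity discussion above, per-channel quantization would replace the single scalar scale with one scale per row or column, for example something like `test_tensor.abs().amax(dim=0) / torch.iinfo(torch.int8).max`, at the cost of storing one scale per channel.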