I know that a semantic group defines the key fields used for uniqueness in the data packages we load: records with the same semantic key values are grouped into a single data package, and the same fields also serve as the key fields for the error stack.
For example: Material and Plant are the semantic keys, the data package size is 50k, but there are 70k unique values of this key combination. How will the data be loaded? Will the first package bring only 50k records, since the package size is 50k? And what happens to the remaining 20k records?
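To make sure I am picturing the mechanism correctly, here is a toy sketch of how I imagine the packaging might work. This is my own assumption, not actual BW/DTP behavior: the function `build_packages`, the field names, and the fill rule (a package may overshoot the size limit rather than split a semantic key group) are all illustrative.

```python
from itertools import groupby

def build_packages(records, key_fields, package_size):
    """Fill packages up to package_size, but never split records that
    share one semantic key across two packages -- so a package may
    end up larger than the configured size (my assumption)."""
    semantic_key = lambda r: tuple(r[f] for f in key_fields)
    # Sort so records sharing a semantic key are adjacent.
    records = sorted(records, key=semantic_key)
    packages, current = [], []
    for _, group in groupby(records, key=semantic_key):
        group = list(group)
        # Close the current package if adding this whole key group
        # would push it past the size limit.
        if current and len(current) + len(group) > package_size:
            packages.append(current)
            current = []
        current.extend(group)
    if current:
        packages.append(current)
    return packages

# Scaled-down version of my scenario: package size 5, 7 unique
# Material/Plant combinations -> two packages of 5 and 2 records.
recs = [{"material": f"M{i}", "plant": "P1"} for i in range(7)]
pkgs = build_packages(recs, ["material", "plant"], 5)
print([len(p) for p in pkgs])  # -> [5, 2]
```

Is this roughly what happens, i.e. the remaining 20k records simply go into the next package(s)? Or does the semantic group change the package sizes in some other way?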
Maybe this seems like a silly question to many of you, but please help me understand.