I just want to check whether there is a quicker way in LINQ to remove duplicates from a list by Id, where each item in the result list carries the sum of some other property (in this case Price). For example:
Start list:
List<Item> a = new List<Item>
{
    new Item { Id = 1, Name = "Item1", Code = "IT00001", Price = 100 },
    new Item { Id = 2, Name = "Item2", Code = "IT00002", Price = 200 },
    new Item { Id = 3, Name = "Item3", Code = "IT00003", Price = 150 },
    new Item { Id = 1, Name = "Item1", Code = "IT00001", Price = 100 },
    new Item { Id = 3, Name = "Item3", Code = "IT00003", Price = 150 },
    new Item { Id = 3, Name = "Item3", Code = "IT00004", Price = 250 }
};
And the result list would be:
List<Item> a = new List<Item>
{
    new Item { Id = 1, Name = "Item1", Code = "IT00001", Price = 200 },
    new Item { Id = 2, Name = "Item2", Code = "IT00002", Price = 200 },
    new Item { Id = 3, Name = "Item3", Code = "IT00003", Price = 550 }
};
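For completeness, here is a minimal sketch of the Item class the snippets assume (the original post does not show it, and the type of Price is an assumption):
public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Code { get; set; }
    public decimal Price { get; set; } // assumed decimal; int works just as well for this example
}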
In method-syntax (functional) LINQ it is something like:
List<Item> b = a
    .GroupBy(x => x.Id)
    .Select(x => new Item { Id = x.Key, Name = x.First().Name, Code = x.First().Code, Price = x.Sum(y => y.Price) })
    .ToList();
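As a possible shortcut, the GroupBy overload that takes a result selector folds the Select step into the grouping call. This is only a sketch, assuming the usual System.Linq using is in place, and b2 is just an illustrative name:
List<Item> b2 = a
    .GroupBy(
        x => x.Id,
        (key, items) => new Item
        {
            Id = key,
            Name = items.First().Name,
            Code = items.First().Code,
            Price = items.Sum(y => y.Price)
        })
    .ToList();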
In query-syntax (keyword-based) LINQ it is something like:
List<Item> c = (from x in a
                group x by x.Id into y
                select new Item { Id = y.Key, Name = y.First().Name, Code = y.First().Code, Price = y.Sum(z => z.Price) }
               ).ToList();
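A small query-syntax variation uses let so that First() is evaluated only once per group; the statement-lambda answer below takes the same approach (c2 is just an illustrative name):
List<Item> c2 = (from x in a
                 group x by x.Id into y
                 let first = y.First()
                 select new Item { Id = y.Key, Name = first.Name, Code = first.Code, Price = y.Sum(z => z.Price) }
                ).ToList();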
Alternatively, with a statement lambda you can grab the first item of each group once and reuse it:
var filteredList = a.GroupBy(e => e.Id).Select(g =>
{
    // One representative item per Id; prices across the group are summed.
    var item = g.First();
    return new Item
    {
        Id = item.Id,
        Name = item.Name,
        Code = item.Code,
        Price = g.Sum(e => e.Price)
    };
}).ToList();
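A quick way to check the result against the expected list from the question (the output format here is arbitrary):
foreach (var item in filteredList)
{
    Console.WriteLine($"{item.Id}: {item.Name} ({item.Code}) = {item.Price}");
}
// Prints:
// 1: Item1 (IT00001) = 200
// 2: Item2 (IT00002) = 200
// 3: Item3 (IT00003) = 550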