In a unidirectional many-to-many relationship between `Registration` and `Item`, where a `Registration` has an `ISet<Item> ItemsPurchased` and `Item` has no reference back to registrations (it's not a useful way to explore the object graph), when I look at the SQL being generated, I see:
INSERT INTO Registrations_Items (RegistrationId, ItemId) VALUES (@p0, @p1);@p0 = 1 [Type: Int32 (0)], @p1 = 1 [Type: Int32 (0)]
UPDATE Items SET Price = @p0, Name = @p1, [...], ListIndex = @p5, EventId = @p6 WHERE ItemId = @p7
The parameters passed to the update are correct, but nothing about the Item has changed, so the update is not needed.
Mapping is by automapping, with this override in place for `Registration` and no overrides for `Item`. The DB schema looks completely correct. I removed all conventions and tested again, and the behavior persisted, so it's not any of my mapping conventions doing this.
mapping.HasManyToMany(e => e.ItemsPurchased).AsSet().Cascade.All().Not.Inverse();
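For context, an override line like the one above normally lives inside a Fluent NHibernate automapping override class. A minimal sketch (the `IAutoMappingOverride<T>` hook is standard Fluent NHibernate; the class name and surrounding details are assumptions, not the poster's actual code):

```csharp
using FluentNHibernate.Automapping;
using FluentNHibernate.Automapping.Alterations;

// Hypothetical wrapper: automapping picks this class up and applies the
// override when building the mapping for Registration.
public class RegistrationMappingOverride : IAutoMappingOverride<Registration>
{
    public void Override(AutoMapping<Registration> mapping)
    {
        mapping.HasManyToMany(e => e.ItemsPurchased)
               .AsSet()
               .Cascade.All()
               .Not.Inverse();
    }
}
```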
Why is NHibernate making this `UPDATE` call, and what can I do to stop it? It's not really hurting anything, but it suggests that I did something wrong, so I'd like to figure out what.
Edit:
Per the comment below, I created a unit test which creates an `Event` (an `Item` must belong to an `Event`), adds two `Item`s to it, evicts the first from the session and flushes the session, then `Get`s the first back by its ID. I notice something odd in the SELECT items line below (second from the bottom):
INSERT INTO Events (blah blah blah...)
select @@IDENTITY
INSERT INTO Items (Price, Name, StartDate, EndDate, ExternalID, ListIndex, EventId) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6);@p0 = 100.42 [Type: Decimal (0)], @p1 = 'Item 1' [Type: String (0)], @p2 = NULL [Type: DateTime (0)], @p3 = NULL [Type: DateTime (0)], @p4 = '123' [Type: String (0)], @p5 = 0 [Type: Int32 (0)], @p6 = 1 [Type: Int32 (0)]
select @@IDENTITY
SELECT blah blah blah FROM Events event0_ WHERE event0_.EventId=@p0;@p0 = 1 [Type: Int32 (0)]
SELECT itemsforsa0_.EventId as EventId1_, itemsforsa0_.ItemId as ItemId1_, itemsforsa0_.ListIndex as ListIndex1_, itemsforsa0_.ItemId as ItemId3_0_, itemsforsa0_.Price as Price3_0_, itemsforsa0_.Name as Name3_0_, itemsforsa0_.StartDate as StartDate3_0_, itemsforsa0_.EndDate as EndDate3_0_, itemsforsa0_.ExternalID as ExternalID3_0_, itemsforsa0_.ListIndex as ListIndex3_0_, itemsforsa0_.EventId as EventId3_0_ FROM Items itemsforsa0_ WHERE itemsforsa0_.EventId=@p0;@p0 = 1 [Type: Int32 (0)]
UPDATE Items SET Price = @p0, Name = @p1, StartDate = @p2, EndDate = @p3, ExternalID = @p4, ListIndex = @p5, EventId = @p6 WHERE ItemId = @p7;@p0 = 100.42000 [Type: Decimal (0)], @p1 = 'Item 1' [Type: String (0)], @p2 = NULL [Type: DateTime (0)], @p3 = NULL [Type: DateTime (0)], @p4 = '123' [Type: String (0)], @p5 = 0 [Type: Int32 (0)], @p6 = 1 [Type: Int32 (0)], @p7 = 1 [Type: Int32 (0)]
The table is created correctly:
create table Items (
ItemId INT IDENTITY NOT NULL,
Price NUMERIC(19,5) not null,
Name NVARCHAR(255) not null,
StartDate DATETIME null,
EndDate DATETIME null,
ExternalID NVARCHAR(255) not null,
ListIndex INT not null,
EventId INT not null,
primary key (ItemId)
)
The DateTimes are deliberately nullable because an item might not need to be date-specific (an example of something that is would be "early-bird registration").
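To make the nullability concrete, here is a hedged sketch of the entity implied by the schema above (property names mirror the columns; the `Demo` harness and everything not shown in the question are illustrative assumptions):

```csharp
using System;

// Sketch of the Item entity suggested by the Items table; virtual members
// are the usual NHibernate convention for lazy-loading proxies.
class Item
{
    public virtual int ItemId { get; set; }
    public virtual decimal Price { get; set; }
    public virtual string Name { get; set; }
    public virtual DateTime? StartDate { get; set; } // nullable: item need not be date-specific
    public virtual DateTime? EndDate { get; set; }
    public virtual string ExternalID { get; set; }
    public virtual int ListIndex { get; set; }
}

class Demo
{
    static void Main()
    {
        var item = new Item { Price = 100.42m, Name = "Item 1" };
        // DateTime? defaults to null, matching the nullable columns, so a
        // NULL in the database round-trips without looking like a change.
        Console.WriteLine(item.StartDate.HasValue);
    }
}
```

Because the property type is `DateTime?` rather than `DateTime`, a database NULL hydrates back as `null` instead of `DateTime.MinValue`, which is exactly what avoids the dirty-check mismatch discussed in the answers below.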
These are called phantom updates, and they are usually related to the mapping of your objects.
This is the primary cause. Imagine we have an object like this:
public class Product
{
public Guid Id { get; set; }
public int ReorderLevel { get; set; }
public decimal UnitPrice { get; set; }
}
and a map:
public class ProductMap : ClassMap<Product>
{
public ProductMap()
{
Not.LazyLoad();
Id(x => x.Id).GeneratedBy.GuidComb();
Map(x => x.ReorderLevel);
Map(x => x.UnitPrice).Not.Nullable();
}
}
Note that `ReorderLevel` will accept nulls. If you save this entity without specifying a `ReorderLevel`, it will be saved with a null value. But when you load it back from the database, since the `ReorderLevel` type is `int`, it is assigned 0, which causes the entity to be marked as dirty and therefore triggers an update.
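The mechanism can be sketched in isolation. This is an illustration of the mismatch, not NHibernate's actual dirty-checking code:

```csharp
using System;

class Demo
{
    static void Main()
    {
        // Illustrative only: the snapshot NHibernate keeps of the database
        // row holds NULL, but a non-nullable int property can only hold 0.
        object snapshot = null;  // value read from the ReorderLevel column
        int hydrated = default;  // value assigned to the int property: 0

        // null never equals boxed 0, so the entity looks modified and an
        // UPDATE is issued on the next flush even though nothing changed.
        bool dirty = !Equals(snapshot, hydrated);
        Console.WriteLine(dirty);
    }
}
```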
These kinds of errors are difficult to detect and track. I recommend you use `Nullable<>` types whenever you really want a null in the database. The way I usually accomplish this is to create a convention that automatically makes a value-type column nullable if the property is declared with `Nullable<>`; otherwise the column is marked not-nullable.
Just to complement, this is how my convention looks:

mapper.BeforeMapProperty += (ins, memb, cust) =>
{
    var type = memb.LocalMember.GetPropertyOrFieldType();
    if (type.IsValueType)
    {
        if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
        {
            // Nullable<T> property: let the column accept NULL
            cust.Column(x => x.NotNullable(false));
        }
        else
        {
            // plain value type: the column must be NOT NULL
            cust.Column(x => x.NotNullable(true));
        }
    }
};
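The `Nullable<>` detection the convention relies on can be demonstrated standalone with plain reflection (the `Product` properties here are illustrative, but the type check is exactly the one used above):

```csharp
using System;

class Product
{
    public int ReorderLevel { get; set; }  // plain value type
    public DateTime? StartDate { get; set; }  // Nullable<DateTime>
}

class Demo
{
    // Same test the convention performs: is this type Nullable<T>?
    static bool IsNullableValueType(Type type) =>
        type.IsValueType
        && type.IsGenericType
        && type.GetGenericTypeDefinition() == typeof(Nullable<>);

    static void Main()
    {
        var reorder = typeof(Product).GetProperty("ReorderLevel").PropertyType;
        var start = typeof(Product).GetProperty("StartDate").PropertyType;
        Console.WriteLine(IsNullableValueType(reorder)); // plain int -> NOT NULL column
        Console.WriteLine(IsNullableValueType(start));   // DateTime? -> nullable column
    }
}
```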
As noted above (below? who knows; look for the comment I left on the other answer), I noticed that the difference between the `CanGenerateDatabaseSchema` unit test and the `CanGetItem` unit test was that one was giving me `DECIMAL(6,2)` and the other was giving me `DECIMAL(19,0)`.

I poked around more and realized that `CanGenerateDatabaseSchema` was using my "real" config (from the Web project), while the other test was using my "unit test" config. My unit tests were being run against SQL Server CE; when I changed them to use the same configuration as my real database (SQL Server 2005), the phantom update suddenly went away.

So if anyone else runs into unexpected phantom updates with decimals: check whether you're using SQL Server CE. Since the test actually passes (the comment that says it's failing is incorrect; it's not failing, just doing extra work), I guess I'll live with it, although why SQL CE is ignoring my configuration is a good question, and a possible NH or FNH bug.
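To see why a scale mismatch alone can trigger a phantom update, here is an illustrative sketch (assumed mechanics: a column effectively created as `DECIMAL(19,0)` drops the fractional digits on the round trip, so the reloaded value never equals the in-memory value):

```csharp
using System;

class Demo
{
    static void Main()
    {
        // Illustration, not NHibernate internals: the entity holds 100.42,
        // but a scale-0 column can only hand back the integral part.
        decimal inMemory = 100.42m;
        decimal reloaded = decimal.Round(inMemory, 0); // what DECIMAL(19,0) stores

        Console.WriteLine(reloaded);
        // The values differ, so the entity is flagged dirty and an UPDATE
        // is issued on every flush -- a phantom update.
        Console.WriteLine(inMemory != reloaded);
    }
}
```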