This page is just a bunch of ideas and is not meant to be accessible yet.

I hate database programming: having to design a database schema in terms of tables and columns, then think about my programmes in terms of objects and classes, and write all this code just to interface the two together.

There has to be a better and more productive way to build general database applications.
Of course, there will always be a need to hand-craft and optimise databases, especially large ones where performance is crucial. But most of the time, the database is just a way to persist your business logic: you really don't want to care so much about how that is done, and you certainly don't want to spend all that time writing glue code.
See Object-Relational Impedance Mismatch for a more complete description of the issues.

Here come the OR Mappers or Object Relational Mappers. They are frameworks that take care of the nitty-gritty bits and make your life much easier when developing applications: you think in terms of objects, classes, business logic, and the ORM will take care of mapping those to an existing or an automatically generated database schema.

It's a high level of abstraction that sits above the data access layer of your application. As such, most ORMs have a performance impact, since data has to cross that additional layer, but the benefits are many, both for you as a developer and for the maintainability of your application.
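To make the idea concrete, here is a deliberately minimal sketch of what an ORM does at its core: the developer declares a plain class, and the mapper derives the table and the SQL from it. All names here are my own illustration; real ORMs like XPO do far more (identity maps, change tracking, relations, caching).

```python
import sqlite3

class Customer:
    # a hypothetical declarative class: field name -> SQL type
    fields = {"name": "TEXT", "city": "TEXT"}

def create_table(conn, cls):
    # generate the schema from the class instead of writing DDL by hand
    cols = ", ".join(f"{n} {t}" for n, t in cls.fields.items())
    conn.execute(f"CREATE TABLE {cls.__name__} "
                 f"(id INTEGER PRIMARY KEY, {cols})")

def save(conn, cls, **values):
    # persist an "object" without the caller writing any SQL
    names = ", ".join(values)
    marks = ", ".join("?" for _ in values)
    cur = conn.execute(
        f"INSERT INTO {cls.__name__} ({names}) VALUES ({marks})",
        tuple(values.values()))
    return cur.lastrowid

conn = sqlite3.connect(":memory:")
create_table(conn, Customer)
rowid = save(conn, Customer, name="Ada", city="London")
```

The point is only the division of labour: the application thinks in classes and objects, the mapper owns the SQL.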

Reflections on XPO and XPCollections.

Issues I see with XPCollections:

  • XPCollections are simple non-intelligent collections of persistent objects. They behave like containers which can be sorted and filtered, but they do not have a strong link to the objects they contain.
  • XPCollections do not know if the content of one of the objects they hold has been modified.
  • XPCollections are not aware of the creation of objects outside of them: these objects are not added to a collection automatically, even if they match the type and criteria of the collection.
  • XPCollections are not aware that an object has been deleted in another collection or outside of them. Objects must be explicitly removed from the collection (by calling Remove).

It seems to me that XPCollection is trying to solve two problems at once:

  • Simple dumb collections of persistent objects, just for the sake of having a way to regroup objects when you need.
  • More complex, live collections that automatically maintain the set of objects matching the collection's criteria: type, filter and sort order.

Sometimes, what I want is for a Collection to be aware that I created an object elsewhere in the application that should automatically become visible in the Collection. Same with deleted objects; they should be removed from the collection automatically.
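The behaviour I'm after could be sketched like this (all names are mine, not XPO's): collections subscribe to object lifecycle events from the session and keep themselves consistent, instead of relying on explicit Add/Remove calls.

```python
class Session:
    """Stand-in for a persistence session that broadcasts lifecycle events."""
    def __init__(self):
        self.collections = []

    def notify_created(self, obj):
        for c in self.collections:
            c.on_created(obj)

    def notify_deleted(self, obj):
        for c in self.collections:
            c.on_deleted(obj)

class LiveCollection:
    """Keeps only objects of a given type that match a criteria predicate."""
    def __init__(self, session, cls, criteria=lambda o: True):
        self.cls, self.criteria, self.items = cls, criteria, []
        session.collections.append(self)

    def on_created(self, obj):
        if isinstance(obj, self.cls) and self.criteria(obj):
            self.items.append(obj)

    def on_deleted(self, obj):
        if obj in self.items:
            self.items.remove(obj)

class Invoice:
    def __init__(self, total):
        self.total = total

session = Session()
big = LiveCollection(session, Invoice, lambda i: i.total > 100)
a, b = Invoice(250), Invoice(50)
session.notify_created(a)   # matches the criteria, appears in `big`
session.notify_created(b)   # filtered out by the criteria
session.notify_deleted(a)   # removed automatically, no explicit Remove
```

The design choice is simply an observer pattern between the session and its collections; the cost is that every create/delete must flow through the session.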
To avoid issues with deleting objects that are currently being edited, we can add a persisted flag that records the state of the object and disallow anyone from editing or deleting that object until it is released. That means there should of course be a way to reset those flags should they remain stuck, after a crash for instance.
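A minimal sketch of that persisted editing flag, with naming of my own invention: an object must be acquired before editing or deleting, and a recovery routine can clear flags left behind by a crash.

```python
class PersistentObject:
    """The editing_by field would be persisted alongside the record."""
    def __init__(self):
        self.editing_by = None

def acquire(obj, user):
    # refuse to edit or delete an object someone else holds
    if obj.editing_by not in (None, user):
        raise RuntimeError(f"object is locked by {obj.editing_by}")
    obj.editing_by = user

def release(obj, user):
    if obj.editing_by == user:
        obj.editing_by = None

def reset_stale_flags(objects):
    # run after detecting a crash, before letting clients back in
    for o in objects:
        o.editing_by = None

record = PersistentObject()
acquire(record, "alice")
# ...if the application crashes here, release() never runs,
# so the recovery pass clears the flag instead:
reset_stale_flags([record])
```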

Integrity check detection

This means data integrity needs to be checked at regular intervals. For instance, whenever the application connects to the database, it records an entry in an OpenSessions table containing the connected user and a random session number generated each time the application starts.
When the application exits cleanly, the record in the OpenSessions table is removed.
Whenever a record for a connecting user already exists in the table, we know the previous session did not exit cleanly and must run an integrity check.

In a multiuser environment, a heartbeat field could be added to the OpenSessions table and updated every few seconds with the current time (UTC). This would allow a reconnecting client to assess whether the database is currently in use by other users. If any of these heartbeats is stale, then we can assume that the corresponding users were disconnected uncleanly and an integrity check should be performed.
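The heartbeat check itself is simple; here is a sketch using a dict as a stand-in for the OpenSessions table, with a threshold value I chose arbitrarily:

```python
STALE_AFTER = 15.0  # seconds of silence before a session is presumed dead

sessions = {}  # user -> last heartbeat time, UTC seconds (stand-in for the table)

def beat(user, now):
    """Each client calls this every few seconds with the current UTC time."""
    sessions[user] = now

def stale_users(now):
    """A reconnecting client lists sessions that went silent."""
    return [u for u, t in sessions.items() if now - t > STALE_AFTER]

beat("alice", 1000.0)
beat("bob", 1000.0)
beat("alice", 1020.0)   # alice keeps beating; bob went silent
# at t=1021, bob's last beat is 21s old: presumed uncleanly disconnected
```

In a real system `now` would come from the database server's clock rather than each client's, so that skewed client clocks cannot produce false positives.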

Integrity check details

Once an integrity issue has been detected, a flag must be set to disallow any client connection to the database until its integrity is verified.

Integrity verification will check and reset editing flags in each record and can perform other needed tasks, like purging deleted records, checking referential integrity, verifying some known data entry issues, like adherence to company rules, etc.
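Putting the pieces together, the verification pass could be shaped like this (task and flag names are illustrative): a maintenance flag blocks clients while each repair and check runs in order.

```python
maintenance = {"locked": False}   # stand-in for a persisted database flag

def run_integrity_check(records, checks):
    """Reset stuck flags, purge deleted records, report failing checks."""
    maintenance["locked"] = True          # refuse new client connections
    try:
        for record in records:
            record["editing_by"] = None   # reset stuck editing flags
        # purge records that were soft-deleted but never cleaned up
        records[:] = [r for r in records if not r.get("deleted")]
        # report the names of any checks that fail on the surviving data
        return [c.__name__ for c in checks
                if not all(c(r) for r in records)]
    finally:
        maintenance["locked"] = False     # let clients back in

def has_owner(r):
    # hypothetical referential-integrity rule: every record needs an owner
    return r.get("owner") is not None

db = [{"editing_by": "bob", "owner": "acme"},
      {"deleted": True, "owner": None},
      {"editing_by": None, "owner": None}]
failed = run_integrity_check(db, [has_owner])
```

Company-specific data entry rules would slot in as more predicates in the `checks` list, which keeps the framework part of the check separate from the business rules.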
