LiteDB
Get document count in collection
Get document count in collection using predicate filter expression
Get document count in collection using predicate filter expression
Get document count in collection using predicate filter expression
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Get document count in collection using predicate filter expression
Get document count in collection
Get document count in collection using predicate filter expression
Get document count in collection using predicate filter expression
Get document count in collection using predicate filter expression
Get document count in collection using predicate filter expression
Get document count in collection using predicate filter expression
Returns true if the collection contains at least 1 document that satisfies the predicate expression
Returns true if the collection contains at least 1 document that satisfies the predicate expression
Returns true if the collection contains at least 1 document that satisfies the predicate expression
Returns true if the collection contains at least 1 document that satisfies the predicate expression
Returns true if the collection contains at least 1 document that satisfies the predicate expression
Returns the min value of the specified key in the collection
Returns the min value of the _id index
Returns the min value of the specified key in the collection
Returns the max value of the specified key in the collection
Returns the max _id index key value
Returns the last/max field using a LINQ expression
Delete a single document in the collection based on the _id index. Returns true if the document was deleted
Delete all documents inside the collection. Returns how many documents were deleted. Runs inside the current transaction
Delete all documents based on a predicate expression. Returns how many documents were deleted
Delete all documents based on a predicate expression. Returns how many documents were deleted
Delete all documents based on a predicate expression. Returns how many documents were deleted
Delete all documents based on a predicate expression. Returns how many documents were deleted
Return a new LiteQueryable to build more complex queries
Find documents inside a collection using predicate expression.
Find documents inside a collection using query definition.
Find documents inside a collection using predicate expression.
Find a document using Document Id. Returns null if not found.
Find the first document using predicate expression. Returns null if not found
Find the first document using predicate expression. Returns null if not found
Find the first document using predicate expression. Returns null if not found
Find the first document using predicate expression. Returns null if not found
Find the first document using defined query structure. Returns null if not found
Returns all documents inside the collection, ordered by the _id index.
Run an include action in each document returned by Find(), FindById(), FindOne() and All() methods to load DbRef documents
Returns a new Collection with this action included
Run an include action in each document returned by Find(), FindById(), FindOne() and All() methods to load DbRef documents
Returns a new Collection with this action included
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Index name - unique name for this collection
Create a custom expression function to be indexed
Whether this is a unique index
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Document field/expression
Whether this is a unique index
Create a new permanent index over all documents inside this collection if the index does not already exist.
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Create a new permanent index over all documents inside this collection if the index does not already exist.
Index name - unique name for this collection
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Get the index expression based on a LINQ expression. Converts IEnumerable members into MultiKey indexes
Drop index and release slot for another index
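The EnsureIndex/DropIndex overloads above can be sketched as follows. This is a minimal sketch: the `Customer` class, field names, and database path are illustrative, not part of the API.

```csharp
using LiteDB;

using var db = new LiteDatabase("app.db");
var col = db.GetCollection<Customer>("customers");

// Returns true only if the index was created now, false if it already existed
col.EnsureIndex(x => x.Name, unique: false);

// An IEnumerable member produces a MultiKey index (one entry per phone)
col.EnsureIndex(x => x.Phones);

// Drop the index and release its slot for another index
col.DropIndex("Name");

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string[] Phones { get; set; }
}
```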
Insert a new entity into this collection. Document Id must be a new value in the collection - Returns the document Id
Insert a new document into this collection using the passed id value.
Insert an array of new documents into this collection. Document Id must be a new value in the collection. Buffer size can be set to commit every N documents
Implements bulk insert of documents into a collection. Useful when inserting lots of documents.
Convert each T document into a BsonDocument, setting autoId for each one
Remove the document _id if it contains an "empty" value (checks for autoId bson type)
Update a document in this collection. Returns false if the document was not found in the collection
Update a document in this collection. Returns false if the document was not found in the collection
Update all documents
Update many documents based on a transform expression. This expression must return a new document that will replace the current document (according to the predicate).
Eg: col.UpdateMany("{ Name: UPPER($.Name), Age }", "_id > 0")
Update many documents by merging the current document with an extend expression. Use your class with initializers.
Eg: col.UpdateMany(x => new Customer { Name = x.Name.ToUpper(), Salary = 100 }, x => x.Name == "John")
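Both UpdateMany forms can be sketched together as below. The `Customer` class and database path are illustrative assumptions.

```csharp
using LiteDB;

using var db = new LiteDatabase("app.db");
var col = db.GetCollection<Customer>("customers");

// BsonExpression form: the transform must return a whole new document
col.UpdateMany("{ Name: UPPER($.Name), Age }", "_id > 0");

// LINQ form: an extend expression merged over each matching document
col.UpdateMany(
    x => new Customer { Name = x.Name.ToUpper(), Salary = 100 },
    x => x.Name == "John");

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public decimal Salary { get; set; }
}
```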
Insert or Update a document in this collection.
Insert or Update all documents
Insert or Update a document in this collection.
Get collection name
Get collection auto id type
Gets the entity mapper for the current collection. Returns null if the collection is of BsonDocument type
Get collection name
Get collection auto id type
Gets the entity mapper for the current collection. Returns null if the collection is of BsonDocument type
Run an include action in each document returned by Find(), FindById(), FindOne() and All() methods to load DbRef documents
Returns a new Collection with this action included
Run an include action in each document returned by Find(), FindById(), FindOne() and All() methods to load DbRef documents
Returns a new Collection with this action included
Insert or Update a document in this collection.
Insert or Update all documents
Insert or Update a document in this collection.
Update a document in this collection. Returns false if the document was not found in the collection
Update a document in this collection. Returns false if the document was not found in the collection
Update all documents
Update many documents based on a transform expression. This expression must return a new document that will replace the current document (according to the predicate).
Eg: col.UpdateMany("{ Name: UPPER($.Name), Age }", "_id > 0")
Update many documents by merging the current document with an extend expression. Use your class with initializers.
Eg: col.UpdateMany(x => new Customer { Name = x.Name.ToUpper(), Salary = 100 }, x => x.Name == "John")
Insert a new entity into this collection. Document Id must be a new value in the collection - Returns the document Id
Insert a new document into this collection using the passed id value.
Insert an array of new documents into this collection. Document Id must be a new value in the collection. Buffer size can be set to commit every N documents
Implements bulk insert of documents into a collection. Useful when inserting lots of documents.
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Index name - unique name for this collection
Create a custom expression function to be indexed
Whether this is a unique index
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Document field/expression
Whether this is a unique index
Create a new permanent index over all documents inside this collection if the index does not already exist.
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Create a new permanent index over all documents inside this collection if the index does not already exist.
Index name - unique name for this collection
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Drop index and release slot for another index
Return a new LiteQueryable to build more complex queries
Find documents inside a collection using predicate expression.
Find documents inside a collection using query definition.
Find documents inside a collection using predicate expression.
Find a document using Document Id. Returns null if not found.
Find the first document using predicate expression. Returns null if not found
Find the first document using predicate expression. Returns null if not found
Find the first document using predicate expression. Returns null if not found
Find the first document using predicate expression. Returns null if not found
Find the first document using defined query structure. Returns null if not found
Returns all documents inside the collection, ordered by the _id index.
Delete a single document in the collection based on the _id index. Returns true if the document was deleted
Delete all documents inside the collection. Returns how many documents were deleted. Runs inside the current transaction
Delete all documents based on a predicate expression. Returns how many documents were deleted
Delete all documents based on a predicate expression. Returns how many documents were deleted
Delete all documents based on a predicate expression. Returns how many documents were deleted
Delete all documents based on a predicate expression. Returns how many documents were deleted
Get document count using property on collection.
Count documents matching a query. This method does not deserialize any document. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any document. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any document. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Get document count using property on collection.
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Count documents matching a query. This method does not deserialize any documents. Needs indexes on query expression
Returns true if query returns any document. This method does not deserialize any document. Needs indexes on query expression
Returns true if query returns any document. This method does not deserialize any document. Needs indexes on query expression
Returns true if query returns any document. This method does not deserialize any document. Needs indexes on query expression
Returns true if query returns any document. This method does not deserialize any document. Needs indexes on query expression
Returns true if query returns any document. This method does not deserialize any document. Needs indexes on query expression
Returns the min value of the specified key in the collection
Returns the min value of the _id index
Returns the min value of the specified key in the collection
Returns the max value of the specified key in the collection
Returns the max _id index key value
Returns the last/max field using a LINQ expression
Get current instance of BsonMapper used in this database instance (can be BsonMapper.Global)
Returns a special collection for storing files/streams inside the datafile. Uses the _files and _chunks collection names. FileId is implemented as string. Use "GetStorage" for custom options
Get a collection using an entity class as a strongly typed document. If the collection does not exist, a new one is created.
Collection name (case insensitive)
Define autoId data type (when object contains no id field)
Get a collection using a name based on typeof(T).Name (BsonMapper.ResolveCollectionName function)
Get a collection using a name based on typeof(T).Name (BsonMapper.ResolveCollectionName function)
Get a collection using a generic BsonDocument. If the collection does not exist, a new one is created.
Collection name (case insensitive)
Define autoId data type (when document contains no _id field)
Initialize a new transaction. Transactions are created per-thread; there is only one transaction per thread.
Returns true if a transaction was created or false if the current thread is already in a transaction.
Commit current transaction
Rollback current transaction
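The per-thread transaction methods can be used together like this. A minimal sketch; the collection name, document shape, and database path are illustrative.

```csharp
using LiteDB;

using var db = new LiteDatabase("app.db");

// Returns false if this thread is already inside a transaction
if (db.BeginTrans())
{
    try
    {
        var col = db.GetCollection("accounts");
        col.Insert(new BsonDocument { ["balance"] = 100 });

        db.Commit();   // persist the transaction
    }
    catch
    {
        db.Rollback(); // discard all changes made in this transaction
        throw;
    }
}
```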
Get a new instance of Storage using a custom FileId type, custom "_files" collection name and custom "_chunks" collection name. LiteDB supports multiple file storages (using different files/chunks collection names)
Get all collection names inside this database.
Checks if a collection exists in the database. Collection name is case insensitive
Drop a collection and all its data + indexes
Rename a collection. Returns false if oldName does not exist or newName already exists
Execute SQL commands and return as data reader.
Execute SQL commands and return as data reader
Execute SQL commands and return as data reader
Do a database checkpoint. Copies all committed transactions from the log file into the datafile.
Rebuild the entire database to remove unused pages and reduce the data file size
Get value from internal engine variables
Set new value to internal engine variables
Get/Set database user version - use this version number to control database model changes
Get/Set database timeout - this timeout is used to wait for locks to be released in transactions
Get/Set whether the database will deserialize dates in the UTC timezone or the Local timezone (default: Local)
Get/Set database size limit (in bytes). The new value must be equal to or larger than the current database size
Get/Set after how many pages (8 KB each) the log file will auto checkpoint (copy from log file to data file). Use 0 for manual-only checkpoint (and no checkpoint on dispose)
Default: 1000 pages
Get database collation (this option can be changed only during the rebuild process)
Get database instance
Insert a new document into the collection. Document Id must be a new value in the collection - Returns the document Id
Insert an array of new documents into the collection. Document Id must be a new value in the collection. Buffer size can be set to commit every N documents
Update a document in the collection. Returns false if the document was not found in the collection
Update all documents
Insert or Update a document based on the _id key. Returns true if the entity was inserted or false if it was updated
Insert or Update all documents based on the _id key. Returns the number of entities that were inserted
Delete an entity based on its _id key
Delete entities based on a Query
Delete entities based on a predicate filter expression
Returns a new instance of LiteQueryable that provides all methods to query any entity inside the collection. Use the fluent API to apply filters/includes and then run any execute command, like ToList() or First()
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Index name - unique name for this collection
Create a custom expression function to be indexed
Whether this is a unique index
Collection Name
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Create a custom expression function to be indexed
Whether this is a unique index
Collection Name
Create a new permanent index over all documents inside this collection if the index does not already exist.
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Collection Name
Create a new permanent index over all documents inside this collection if the index does not already exist.
Index name - unique name for this collection
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Collection Name
Search for a single instance of T by Id. Shortcut for Query.SingleById
Execute Query[T].Where(predicate).ToList();
Execute Query[T].Where(predicate).ToList();
Execute Query[T].Where(predicate).First();
Execute Query[T].Where(predicate).First();
Execute Query[T].Where(predicate).FirstOrDefault();
Execute Query[T].Where(predicate).FirstOrDefault();
Execute Query[T].Where(predicate).Single();
Execute Query[T].Where(predicate).Single();
Execute Query[T].Where(predicate).SingleOrDefault();
Execute Query[T].Where(predicate).SingleOrDefault();
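The repository shortcuts above can be sketched together. A minimal sketch; the `Customer` class and database path are illustrative assumptions.

```csharp
using LiteDB;

using var repo = new LiteRepository("app.db");

repo.Insert(new Customer { Name = "John", Age = 30 });

// Fluent query (same LiteQueryable as ILiteCollection.Query())
var john = repo.Query<Customer>()
    .Where(x => x.Name == "John")
    .FirstOrDefault();

// Shortcut methods: SingleById and Fetch (= Where(predicate).ToList())
var byId = repo.SingleById<Customer>(1);
var adults = repo.Fetch<Customer>(x => x.Age > 18);

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}
```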
The LiteDB database. Used to create a LiteDB instance and use all storage resources. It's the database connection
Get current instance of BsonMapper used in this database instance (can be BsonMapper.Global)
Starts LiteDB database using a connection string for file system database
Starts LiteDB database using a connection string for file system database
Starts LiteDB database using a generic Stream implementation (mostly MemoryStream).
DataStream reference
BsonMapper mapper reference
LogStream reference
Start LiteDB database using a pre-existing engine. When the LiteDatabase instance is disposed, the engine instance will be disposed too
Get a collection using an entity class as strong typed document. If collection does not exist, create a new one.
Collection name (case insensitive)
Define autoId data type (when object contains no id field)
Get a collection using a name based on typeof(T).Name (BsonMapper.ResolveCollectionName function)
Get a collection using a name based on typeof(T).Name (BsonMapper.ResolveCollectionName function)
Get a collection using a generic BsonDocument. If collection does not exist, create a new one.
Collection name (case insensitive)
Define autoId data type (when document contains no _id field)
Initialize a new transaction. Transactions are created per-thread; there is only one transaction per thread.
Returns true if a transaction was created or false if the current thread is already in a transaction.
Commit current transaction
Rollback current transaction
Returns a special collection for storing files/streams inside the datafile. Uses the _files and _chunks collection names. FileId is implemented as string. Use "GetStorage" for custom options
Get a new instance of Storage using a custom FileId type, custom "_files" collection name and custom "_chunks" collection name. LiteDB supports multiple file storages (using different files/chunks collection names)
Get all collection names inside this database.
Checks if a collection exists in the database. Collection name is case insensitive
Drop a collection and all its data + indexes
Rename a collection. Returns false if oldName does not exist or newName already exists
Execute SQL commands and return as data reader.
Execute SQL commands and return as data reader
Execute SQL commands and return as data reader
Do a database checkpoint. Copies all committed transactions from the log file into the datafile.
Rebuild the entire database to remove unused pages and reduce the data file size
Get value from internal engine variables
Set new value to internal engine variables
Get/Set database user version - use this version number to control database model changes
Get/Set database timeout - this timeout is used to wait for locks to be released in transactions
Get/Set whether the database will deserialize dates in the UTC timezone or the Local timezone (default: Local)
Get/Set database size limit (in bytes). The new value must be equal to or larger than the current database size
Get/Set after how many pages (8 KB each) the log file will auto checkpoint (copy from log file to data file). Use 0 for manual-only checkpoint (and no checkpoint on dispose)
Default: 1000 pages
Get database collation (this option can be changed only during the rebuild process)
An IQueryable-like class to write fluent queries over documents in a collection.
Load cross reference documents from path expression (DbRef reference)
Load cross reference documents from path expression (DbRef reference)
Load cross reference documents from path expression (DbRef reference)
Filters a sequence of documents based on a predicate expression
Filters a sequence of documents based on a predicate expression
Filters a sequence of documents based on a predicate expression
Filters a sequence of documents based on a predicate expression
Sort the documents of the resultset in ascending (or descending) order according to a key (supports only one OrderBy)
Sort the documents of the resultset in ascending (or descending) order according to a key (supports only one OrderBy)
Sort the documents of the resultset in descending order according to a key (supports only one OrderBy)
Sort the documents of the resultset in descending order according to a key (supports only one OrderBy)
Groups the documents of the resultset according to a specified key selector expression (supports only one GroupBy)
Filter documents after the GroupBy pipe according to a predicate expression (requires GroupBy and supports only one Having)
Transform the input document into a new output document. Can be used with each document, with GroupBy, or over the whole source
Project each document of the resultset into a new document/value based on a selector expression
Execute the query locking the collection in write mode. This avoids other threads changing results after the document is read and before the transaction ends
Bypasses a specified number of documents in the resultset and returns the remaining documents (same as Skip)
Bypasses a specified number of documents in the resultset and returns the remaining documents (same as Offset)
Return a specified number of contiguous documents from the start of the resultset
Execute query and returns resultset as generic BsonDataReader
Execute query and return resultset as IEnumerable of documents
Execute query and return resultset as IEnumerable of T. If T is a ValueType or String, return values only (not documents)
Execute query and return results as a List
Execute query and return results as an Array
Get execution plan over current query definition to see how engine will execute query
Returns the only document of the resultset, and throws an exception if there is not exactly one document in the sequence
Returns the only document of the resultset, or null if the resultset is empty; this method throws an exception if there is more than one document in the sequence
Returns the first document of the resultset
Returns the first document of the resultset, or null if the resultset is empty
Execute the Count method over the filtered query
Execute the Count method over the filtered query
Returns true/false if query returns any result
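A fluent query chaining the methods above might look like this. A minimal sketch; the `Customer`/`Order` classes and database path are illustrative, and `Include` only matters when `Orders` is mapped as a DbRef.

```csharp
using System.Collections.Generic;
using LiteDB;

using var db = new LiteDatabase("app.db");
var col = db.GetCollection<Customer>("customers");

var page = col.Query()
    .Include(x => x.Orders)             // load DbRef documents
    .Where(x => x.Age > 18)             // predicate filter
    .OrderBy(x => x.Name)               // only one OrderBy is supported
    .Select(x => new { x.Name, x.Age }) // project into a new value
    .Limit(10)                          // contiguous documents from the start
    .ToList();

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public List<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
}
```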
The LiteDB repository pattern. A simple way to access your documents in a single class with a fluent query API
Get database instance
Starts LiteDB database using an existing Database instance
Starts LiteDB database using a connection string for file system database
Starts LiteDB database using a connection string for file system database
Starts LiteDB database using a Stream disk
Insert a new document into the collection. Document Id must be a new value in the collection - Returns the document Id
Insert an array of new documents into the collection. Document Id must be a new value in the collection. Buffer size can be set to commit every N documents
Update a document in the collection. Returns false if the document was not found in the collection
Update all documents
Insert or Update a document based on the _id key. Returns true if the entity was inserted or false if it was updated
Insert or Update all documents based on the _id key. Returns the number of entities that were inserted
Delete an entity based on its _id key
Delete entities based on a Query
Delete entities based on a predicate filter expression
Returns a new instance of LiteQueryable that provides all methods to query any entity inside the collection. Use the fluent API to apply filters/includes and then run any execute command, like ToList() or First()
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Index name - unique name for this collection
Create a custom expression function to be indexed
Whether this is a unique index
Collection Name
Create a new permanent index over all documents inside this collection if the index does not already exist. Returns true if the index was created or false if it already exists
Create a custom expression function to be indexed
Whether this is a unique index
Collection Name
Create a new permanent index over all documents inside this collection if the index does not already exist.
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Collection Name
Create a new permanent index over all documents inside this collection if the index does not already exist.
Index name - unique name for this collection
LINQ expression to be converted into a BsonExpression to be indexed
Create a unique keys index?
Collection Name
Search for a single instance of T by Id. Shortcut for Query.SingleById
Execute Query[T].Where(predicate).ToList();
Execute Query[T].Where(predicate).ToList();
Execute Query[T].Where(predicate).First();
Execute Query[T].Where(predicate).First();
Execute Query[T].Where(predicate).FirstOrDefault();
Execute Query[T].Where(predicate).FirstOrDefault();
Execute Query[T].Where(predicate).Single();
Execute Query[T].Where(predicate).Single();
Execute Query[T].Where(predicate).SingleOrDefault();
Execute Query[T].Where(predicate).SingleOrDefault();
Indicate which constructor method will be used in this entity
Set a name to this property in BsonDocument
Indicate that property will be used as BsonDocument Id
Indicate that the property will not be persisted in Bson serialization
Indicate that the field is not persisted inside this document but is a reference to another document (DbRef)
Class that converts your entity class to/from BsonDocument
If you prefer to use a new instance of BsonMapper (not Global), be sure to cache this instance for better performance
Serialization rules:
- Classes must be "public" with a public constructor (without parameters)
- Properties must have public getter (can be read-only)
- Entity class must have Id property, [ClassName]Id property or [BsonId] attribute
- No circular references
- Fields are not valid
- IList and Array are supported
- IDictionary is supported (Key must be a simple datatype - converted by ChangeType)
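An entity class satisfying all of the rules above might look like this (the class and member names are illustrative):

```csharp
using System.Collections.Generic;

// Public class with a public parameterless constructor, public getters,
// and an "Id" property that resolves as the document _id
public class Customer
{
    public int Id { get; set; }                         // mapped to _id
    public string Name { get; set; }
    public List<string> Phones { get; set; }            // IList is supported
    public Dictionary<string, int> Scores { get; set; } // IDictionary with a simple key type
}
```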
Mapping cache between Class/BsonDocument
Map serializer/deserialize for custom types
Type instantiator function to support IoC
Type name binder to control how type names are serialized to BSON documents
Global instance used when no BsonMapper is passed to the LiteDatabase ctor
A resolver for field names
Indicate that the mapper does not serialize null values (default false)
Apply .Trim() to strings when serializing (default true)
Convert EmptyString to Null (default true)
Get/Set if enums must be converted into Integer values. If false, enums will be converted into String values.
MUST BE "true" to support LINQ expressions (default false)
Get/Set whether the mapper must include fields (default: false)
Get/Set whether the mapper must include non-public members (private, protected and internal) (default: false)
Get/Set the maximum depth for nested objects (default 20)
A custom callback to change MemberInfo behavior when converting to MemberMapper.
Use mapper.ResolveMember(Type entity, MemberInfo property, MemberMapper documentMappedField)
Set FieldName to null if you want to remove it from the mapped document
Custom collection name resolution based on Type
Register a custom type serializer/deserialize function
Register a custom type serializer/deserialize function
Map your entity class to BsonDocument using fluent API
Resolve LINQ expression into BsonExpression
Resolve LINQ expression into BsonExpression (for index only)
Use lower camel case resolution to convert property names to field names
Uses lower camel case with a delimiter to convert property names to field names
Get property mapper between typed .NET class and BsonDocument - Cache results
Use this method to override how your class is mapped, by default, from entity to Bson document.
Returns an EntityMapper from each requested Type
Gets MemberInfo that refers to Id from a document object.
Returns all members that will be mapped between the POCO class and the document
Get the best constructor to use to initialize this entity.
- Look if it contains a [BsonCtor] attribute
- Look for a parameterless ctor
- Look for the first constructor with parameters and use the BsonDocument to send RawValue
Register a property mapper as DbRef to serialize/deserialize only document reference _id
Register a property as a DbRef - implements custom Serialize/Deserialize actions to convert the entity to $id, $ref only
Register a property as a DbRefList - implements custom Serialize/Deserialize actions to convert the entity to $id, $ref only
Delegate for deserialization callback.
The BsonMapper instance that triggered the deserialization.
The target type for deserialization.
The BsonValue to be deserialized.
The deserialized BsonValue.
Gets called before deserialization of a value
Deserialize a BsonDocument to entity class
Deserialize a BsonDocument to entity class
Deserialize a BsonValue to .NET object typed in T
Deserialize a BsonValue to a .NET object based on the type parameter
Serialize an entity class to BsonDocument
Serialize an entity class to BsonDocument
Serialize to BsonValue any .NET object based on T type (using mapping rules)
Serialize to BsonValue any .NET object based on type parameter (using mapping rules)
Helper class to modify your entity-to-document mapping. Can be used instead of attribute decoration
Define which property will not be mapped to document
Define a custom name for a property when mapping to document
Define which property is your document id (primary key). Define if this property supports auto-id
Define which property is your document id (primary key). Define if this property supports auto-id
Define a subdocument (or a list of) as a reference
Get a property name based on an expression. Eg.: 'x => x.UserId' returns the string "UserId"
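The EntityBuilder fluent methods above can be sketched together. A minimal sketch; the `Customer`/`Order` classes and their member names are illustrative assumptions.

```csharp
using System.Collections.Generic;
using LiteDB;

var mapper = new BsonMapper();

mapper.Entity<Customer>()
    .Id(x => x.CustomerKey)          // use CustomerKey as the document _id
    .Field(x => x.Name, "name")      // custom document field name
    .Ignore(x => x.Age)              // not mapped to the document
    .DbRef(x => x.Orders, "orders"); // stored as { $id, $ref } references only

using var db = new LiteDatabase("app.db", mapper);

public class Customer
{
    public int CustomerKey { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public List<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
}
```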
Class to map entity class to BsonDocument
Indicate which Type this entity mapper is
List all type members that will be mapped to/from BsonDocument
Indicate which member is _id
Get/Set a custom ctor function to create new entity instance
Resolve expression to get member mapped
Visit :: `x => x.Customer.Name`
Visit lambda invocation
Visit :: x => `x`.Customer.Name
Visit :: x => x.`Customer.Name`
Visit :: x => x.Customer.Name.`ToUpper()`
Visit :: x => x.Age + `10` (will create parameter: `p0`, `p1`, ...)
Visit :: x => `!x.Active`
Visit :: x => `new { x.Id, x.Name }`
Visit :: x => `new MyClass { Id = 10 }`
Visit :: x => `new int[] { 1, 2, 3 }`
Visit :: x => x.Id `+` 10
Visit :: x => `x.Id > 0 ? "ok" : "not-ok"`
Visit :: x => `x.FirstName ?? x.LastName`
Visit :: x => `x.Items[5]`
Resolve string pattern using an object + N arguments. Will write over _builder
Resolve Enumerable predicate when using Any/All enumerable extensions
Get the string operator from a binary expression
Returns document field name for some type member
Define if this method is index access and must eval index value (do not use parameter)
Visit expression but, if ensurePredicate = true, force expression be a predicate (appending ` = true`)
Compile and execute expression (can be cached)
Try find a Type Resolver for declaring type
Class used to test whether a member expression in an Expression is based on the parameter (`x => x.Name`) or on a variable (`x => externalVar`)
Internal representation for a .NET member mapped to BsonDocument
If the member is Id, indicates whether it is AutoId
Member name
Member return data type
Converted document field name
Delegate method to get value from entity instance
Delegate method to set value to entity instance
When used, a serialization function from the entity class to a bson value can be defined
When used, a deserialization function from a bson value can be defined
Is this property a DbRef? Must implement Serialize/Deserialize delegates
Indicates that this property contains a list of elements (IEnumerable)
When the property is an array of items, gets the underlying type (otherwise it is the same as PropertyType)
Is this property ignored
Helper class to get entity properties and map as BsonValue
Using Expressions is the easy and fast way to create classes, structs, and get/set fields/properties. But it does not work in .NET 3.5
Create a new instance from a Type
Get a list of all accepted data types for property conversion to BsonValue
Get underlying type - used to get the inner Type from a Nullable type
Get item type from a generic List or Array
Returns true if Type is any kind of Array/IList/ICollection/....
Return if type is simple value
Returns true if Type implement ICollection (like List, HashSet)
Returns if Type is a generic Dictionary
Select a member from a list of members using a predicate order function
Get a friendly method name with parameter types
Get C# friendly primitive type names
Contains all well known vulnerable types according to ysoserial.net
Open database in safe mode
Dequeue stack and dispose database on empty stack
Internal class to parse and execute sql-like commands
BEGIN [ TRANS | TRANSACTION ]
CHECKPOINT
COMMIT [ TRANS | TRANSACTION ]
CREATE [ UNIQUE ] INDEX {indexName} ON {collection} ({indexExpr})
DELETE {collection} WHERE {whereExpr}
DROP INDEX {collection}.{indexName}
DROP COLLECTION {collection}
INSERT INTO {collection} VALUES {doc0} [, {docN}] [ WITH ID={type} ]
Parse :[type] for AutoId (just after collection name)
{expr0}, {expr1}, ..., {exprN}
{doc0}, {doc1}, ..., {docN} {EOF|;}
PRAGMA [DB_PARAM] = VALUE
PRAGMA [DB_PARAM]
SHRINK
RENAME COLLECTION {collection} TO {newName}
ROLLBACK [ TRANS | TRANSACTION ]
[ EXPLAIN ]
SELECT {selectExpr}
[ INTO {newcollection|$function} [ : {autoId} ] ]
[ FROM {collection|$function} ]
[ INCLUDE {pathExpr0} [, {pathExprN}] ]
[ WHERE {filterExpr} ]
[ GROUP BY {groupByExpr} ]
[ HAVING {filterExpr} ]
[ ORDER BY {orderByExpr} [ ASC | DESC ] ]
[ LIMIT {number} ]
[ OFFSET {number} ]
[ FOR UPDATE ]
Read collection name and parameter (in case of system collections)
Read collection name and parameter (in case of system collections)
UPDATE - update documents - if used with {key} = {exprValue} will merge the current document with these fields
if used with { key: value } will replace current document with new document
UPDATE {collection}
SET [{key} = {exprValue}, {key} = {exprValue} | { newDoc }]
[ WHERE {whereExpr} ]
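Putting the grammar fragments above together, a few illustrative statements (collection names, field names, and the AutoId type are invented for the example; exact syntax support may vary by LiteDB version):

```sql
CREATE UNIQUE INDEX idx_name ON customers (LOWER($.Name));
INSERT INTO customers VALUES {_id: 1, Name: 'Ana', Age: 30} WITH ID=INT;
SELECT $.Name FROM customers WHERE $.Age >= 18 ORDER BY $.Name ASC LIMIT 10;
UPDATE customers SET Age = Age + 1 WHERE _id = 1;
DELETE customers WHERE $.Age < 0;
```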
Find a file inside datafile and returns LiteFileInfo instance. Returns null if not found
Find all files that match the predicate expression.
Find all files that match the predicate expression.
Find all files that match the predicate expression.
Find all files that match the predicate expression.
Find all files inside file collections
Returns true if a file exists in the database
Open/Create new file storage and returns linked Stream to write operations.
Upload a file based on stream data
Upload a file based on file system data
Update metadata on a file. File must exist.
Load data inside storage and returns as Stream
Copy all file content to a stream
Copy all file content to a file
Delete a file inside datafile and all metadata related
Represents a file inside storage collection
Open file stream to read from database
Open file stream to write to database
Copy file content to another stream
Save file content to an external file
Number of bytes on each chunk document to store
Get file information
Consume all _buffer bytes and write to chunk collection
Storage is a special collection to store files and streams.
Find a file inside datafile and returns LiteFileInfo instance. Returns null if not found
Find all files that match the predicate expression.
Find all files that match the predicate expression.
Find all files that match the predicate expression.
Find all files that match the predicate expression.
Find all files inside file collections
Returns true if a file exists in the database
Open/Create new file storage and returns linked Stream to write operations.
Upload a file based on stream data
Upload a file based on file system data
Update metadata on a file. File must exist.
Load data inside storage and returns as Stream
Copy all file content to a stream
Copy all file content to a file
Delete a file inside datafile and all metadata related
Manage ConnectionString to connect and create databases. Connection strings are name/value pairs in the form Name1=Value1; Name2=Value2
"connection": Return how the engine will be opened (default: Direct)
"filename": Full path or relative path from DLL directory
"password": Database password used to encrypt/decrypt data pages
"initial size": If database is new, initialize with allocated space - support KB, MB, GB (default: 0)
"readonly": Open datafile in readonly mode (default: false)
"upgrade": Check if data file is an old version and convert before open (default: false)
"auto-rebuild": If the last database close left an invalid data state due to an exception, rebuild the datafile on next open (default: false)
"collation": Set the default collation at database creation (default: "[CurrentCulture]/IgnoreCase")
Initialize empty connection string
Initialize connection string parsing string in "key1=value1;key2=value2;...." format or only "filename" as default (when no ; char found)
Get value from parsed connection string. Returns null if not found
Create ILiteEngine instance according to connection string parameters. For now, only Local/Shared are supported
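A hedged sketch (in Python, for illustration) of the Name1=Value1; Name2=Value2 parsing described above. LiteDB's real parser is C# and also handles quoted values and unit suffixes; here a string without '=' is treated as the bare filename:

```python
def parse_connection_string(s):
    # Bare value with no '=' is treated as the filename (the docs above say
    # "only 'filename' as default"); keys are case-insensitive.
    if "=" not in s:
        return {"filename": s.strip()}
    pairs = {}
    for part in s.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        pairs[key.strip().lower()] = value.strip()
    return pairs
```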
Class is the result of an optimized QueryBuild. Indicates how the engine must run the query - no more decisions are made by the engine; it must only execute the query as it was defined
Represent full query options
Indicate when a query must execute in ascending order
Indicate when a query must execute in descending order
Returns all documents
Returns all documents
Returns all documents
Returns all documents whose value is equal to the specified value (=)
Returns all documents whose value is less than the specified value (<)
Returns all documents whose value is less than or equal to the specified value (<=)
Returns all documents whose value is greater than the specified value (>)
Returns all documents whose value is greater than or equal to the specified value (>=)
Returns all documents whose value is between "start" and "end" values (BETWEEN)
Returns all documents whose value starts with the specified value (LIKE)
Returns all documents whose value contains the specified value (CONTAINS) - string Contains
Returns all documents whose value is not equal to the specified value (not equals)
Returns all documents whose value is in the values list (IN)
Returns all documents whose value is in the values list (IN)
Returns all documents whose value is in the values list (IN)
Get all operands to work with array or enumerable values
Returns documents that exist in BOTH query results. If both queries have indexes, the left query has index preference (the other side will run as a full scan)
Returns documents that exist in ALL query results.
Returns documents that exist in ANY query results (Union).
Returns documents that exist in ANY query results (Union).
[ EXPLAIN ]
SELECT {selectExpr}
[ INTO {newcollection|$function} [ : {autoId} ] ]
[ FROM {collection|$function} ]
[ INCLUDE {pathExpr0} [, {pathExprN}] ]
[ WHERE {filterExpr} ]
[ GROUP BY {groupByExpr} ]
[ HAVING {filterExpr} ]
[ ORDER BY {orderByExpr} [ ASC | DESC ] ]
[ LIMIT {number} ]
[ OFFSET {number} ]
[ FOR UPDATE ]
Returns all documents for which at least one value in arrayFields is equal to the value
Returns all documents for which at least one value in arrayFields is less than the value (<)
Returns all documents for which at least one value in arrayFields is less than or equal to the value (<=)
Returns all documents for which at least one value in arrayFields is greater than the value (>)
Returns all documents for which at least one value in arrayFields is greater than or equal to the value (>=)
Returns all documents for which at least one value in arrayFields is between "start" and "end" values (BETWEEN)
Returns all documents for which at least one value in arrayFields starts with the value (LIKE)
Returns all documents for which at least one value in arrayFields is not equal to the value (not equals)
All BsonTypes supported in AutoId insert operations
Get/Set position of this document inside database. It's filled when used in Find operation.
Get/Set a field for document. Fields are case sensitive
Get all document elements - Return "_id" as first of all (if exists)
All supported BsonTypes in sort order
Represent a Bson Value used in BsonDocument
Represent a Null bson type
Represent a MinValue bson type
Represent a MaxValue bson type
Create a new document used in DbRef => { $id: id, $ref: collection }
Indicate BsonType of this BsonValue
Get internal .NET value object
Get/Set a field for document. Fields are case sensitive - works only when the value is a document
Get/Set value in array position. Works only when the value is an array
Returns how many bytes this BsonValue will consume when converted into binary BSON
If recalc = false, use cached length value (from Array/Document only)
Get how many bytes a single element will use in BSON format
Class to call method for convert BsonDocument to/from byte[] - based on http://bsonspec.org/spec.html
In v5 this class uses the new BufferReader/Writer to work with byte[] segments. This class is just a shortcut
Serialize BsonDocument into a binary array
Deserialize binary data into BsonDocument
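A teaching sketch (in Python) of the BSON framing described at bsonspec.org, limited to int32 values only; LiteDB's BsonSerializer covers all BSON types:

```python
import struct

def bson_encode_simple(doc):
    # Per bsonspec.org: int32 total length, then elements (type byte 0x10
    # for int32, cstring key, little-endian int32 value), then trailing 0x00.
    body = b""
    for key, value in doc.items():
        body += b"\x10" + key.encode() + b"\x00" + struct.pack("<i", value)
    payload = body + b"\x00"
    return struct.pack("<i", len(payload) + 4) + payload
```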
Class to read zero, one, or a collection of BsonValues. Used in SQL execution commands and query returns. Uses a local data source (IEnumerable[BsonDocument])
Initialize with no value
Initialize with a single value
Initialize with an IEnumerable data source
Return whether the result has any value
Return current value
Return collection name
Move cursor to next result. Returns true if read was possible
Implement some Enumerable methods to IBsonDataReader
Delegate function to get compiled enumerable expression
Delegate function to get compiled scalar expression
Compile and execute string expressions using BsonDocuments. Used in all document manipulation (transform, filter, indexes, updates). See https://github.com/mbdavid/LiteDB/wiki/Expressions
Get formatted expression
Indicate expression type
If true, this expression does not change when the same document/parameter is passed (only a few methods change - like NOW() - or parameters)
Get/Set parameter values that will be used on expression execution
In predicate expressions, indicates the Left side
In predicate expressions, indicates the Right side
Get/Set whether this expression (or any inner expression) uses the global Source (*)
Get transformed LINQ expression
Fill this hashset with all fields used in the root level of the document (used for partial deserialization) - "$" means all fields
Indicates if this expression returns a single value or an IEnumerable value
Indicates that the expression evaluates to TRUE or FALSE (=, >, ...). OR and AND are not considered predicate expressions
Predicate expressions must have Left/Right expressions
Can this expression be indexed? To be indexable, an expression must contain at least 1 field and
must use only immutable methods and no parameters
This expression has no dependency on BsonDocument, so it can be used as a user value (when selecting an index)
Indicates when a predicate expression uses the ANY keyword to filter array items
Expression compiled into an enumerable function to be executed: func(source[], root, current, parameters) returning multiple values
Expression compiled into a scalar function to be executed: func(source[], root, current, parameters) returning a single value
Get the default field name used when converting a simple BsonValue into a BsonDocument
Only internal ctor (from BsonParserExpression)
Implicit string converter
Implicit string converter
Execute expression with an empty document (used only to resolve math/functions).
Execute expression and returns IEnumerable values
Execute expression and returns IEnumerable values
Execute expression and returns IEnumerable values - returns NULL if no elements
Execute expression over document to get all index keys.
Return distinct value (no duplicate key to same document)
Execute scalar expression with a blank document and empty source (used only to resolve math/functions).
Execute scalar expression over a single document and return a single value (or BsonNull when empty). Throws an exception if the expression is not a scalar expression
Execute scalar expression over multiple documents and return a single value (or BsonNull when empty). Throws an exception if the expression is not a scalar expression
Execute expression and returns IEnumerable values - returns NULL if no elements
Parse string and create new instance of BsonExpression - can be cached
Parse string and create new instance of BsonExpression - can be cached
Parse string and create new instance of BsonExpression - can be cached
Parse tokenizer and create new instance of BsonExpression - for now, do not use cache
Parse and compile string expression and return BsonExpression
Set the same parameter reference to all expression children (left, right)
Get root document $ expression
Get all registered methods for BsonExpressions
Load all static methods from BsonExpressionMethods class. Use a dictionary using name + parameter count
Get expression method with same name and same parameter - return null if not found
Get all registered functions for BsonExpressions
Load all static methods from BsonExpressionFunctions class. Use a dictionary using name + parameter count
Get expression function with same name and same parameter - return null if not found
Count all values. Return a single value
Find minimal value from all values (number values only). Return a single value
Find max value from all values (number values only). Return a single value
Returns the first value from a list of values (scans all source)
Returns last value from an list of values
Find average value from all values (number values only). Return a single value
Sum all values (number values only). Return a single value
Return "true" if inner collection contains any result
ANY($.items[*])
Return a new instance of MINVALUE
Create a new OBJECTID value
Create a new GUID value
Return a new DATETIME (Now)
Return a new DATETIME (UtcNow)
Return a new DATETIME (Today)
Return a new instance of MAXVALUE
Convert values into INT32. Returns empty if not possible to convert
Convert values into INT64. Returns empty if not possible to convert
Convert values into DOUBLE. Returns empty if not possible to convert
Convert values into DOUBLE. Returns empty if not possible to convert
Convert values into DECIMAL. Returns empty if not possible to convert
Convert values into DECIMAL. Returns empty if not possible to convert
Convert value into STRING
Return an array from a list of values. Supports multiple values but returns a single value
Return a binary from string (base64) values
Convert values into OBJECTID. Returns empty if not possible to convert
Convert values into GUID. Returns empty if not possible to convert
Return converted value into BOOLEAN value
Convert values into DATETIME. Returns empty if not possible to convert
Convert values into DATETIME. Returns empty if not possible to convert. Support custom culture info
Convert values into DATETIME. Returns empty if not possible to convert
Convert values into DATETIME. Returns empty if not possible to convert
Create a new instance of DATETIME based on year, month, day (local time)
Create a new instance of DATETIME based on year, month, day (UTC)
Return true if value is MINVALUE
Return true if value is NULL
Return true if value is INT32
Return true if value is INT64
Return true if value is DOUBLE
Return true if value is DECIMAL
Return true if value is NUMBER (int, double, decimal)
Return true if value is STRING
Return true if value is DOCUMENT
Return true if value is ARRAY
Return true if value is BINARY
Return true if value is OBJECTID
Return true if value is GUID
Return true if value is BOOLEAN
Return true if value is DATETIME
Return true if value is DATE (alias to DATETIME)
Alias to INT32(values)
Alias to INT64(values)
Alias to BOOLEAN(values)
Alias to DATETIME(values) and DATETIME_UTC(values)
Alias to IS_INT32(values)
Alias to IS_INT64(values)
Alias to IS_BOOLEAN(values)
Alias to IS_DATE(values)
Get year from date
Get month from date
Get day from date
Get hour from date
Get minute from date
Get seconds from date
Add an interval to date. Use dateInterval: "y" (or "year"), "M" (or "month"), "d" (or "day"), "h" (or "hour"), "m" (or "minute"), "s" or ("second")
Returns the interval between 2 dates. Use dateInterval: "y|year", "M|month", "d|day", "h|hour", "m|minute", "s|second"
Convert UTC date into LOCAL date
Convert LOCAL date into UTC date
Apply absolute value (ABS) to all number values
Round all number values
Implement POWER (x and y)
Parse a JSON string into a new BsonValue
JSON('{a:1}') = {a:1}
Create a new document and copy all properties from the source document. Then copy properties (overwriting if needed) from the extend document
Always returns a new document!
EXTEND($, {a: 2}) = {_id:1, a: 2}
Convert an array into an IEnumerable of values - if not an array, returns it as a single yield value
ITEMS([1, 2, null]) = 1, 2, null
Concatenates 2 sequences into a new single sequence
Get all KEYS names from a document
Get all values from a document
Return CreationTime from ObjectId value - returns null if not an ObjectId
Conditional IF statement. If the condition is true, returns the TRUE value; otherwise, the FALSE value
Return the first value if not null. If null, returns the second value.
Return length of variant value (valid only for String, Binary, Array or Document [keys])
Returns the first num elements of values.
Returns the union of the two enumerables.
Returns the set difference between the two enumerables.
Returns a unique list of items
Return a random int value
Return a random int value within the given min/max values
Return lower case from string value
Return UPPER case from string value
Apply Left TRIM (start) from string value
Apply Right TRIM (end) from string value
Apply TRIM from string value
Reports the zero-based index of the first occurrence of the specified string in this instance
Reports the zero-based index of the first occurrence of the specified string in this instance
Returns substring from string value using index and length (zero-based)
Returns substring from string value using index and length (zero-based)
Returns replaced string changing oldValue with newValue
Return value string with left padding
Return value string with right padding
Split value string based on separator
Split value string based on regular expression pattern
Return format value string using format definition (same as String.Format("{0:~}", values)).
Join all values into a single string with ',' separator.
Join all values into a single string with a string separator
Test if value is match with regular expression pattern
Apply regular expression pattern over value to get group data. Return null if not found
When a method is decorated with this attribute, it means the method is not immutable
Add two number values. If either side is a string, concatenate left+right as strings
Minus two number values
Multiply two number values
Divide two number values
Mod two number values
Test if left and right are the same value. Returns true or false
Test if left is greater than the right value. Returns true or false
Test if left is greater than or equal to the right value. Returns true or false
Test if left is less than the right value. Returns true or false
Test if left is less than or equal to the right value. Returns true or false
Test if left and right are not the same value. Returns true or false
Test if left matches right using "SQL LIKE". Returns true or false. Works only when left and right are strings
Test if left is between the right-side array values. Returns true or false. Right value must be an array. Supports multiple values
Test if left is equal to any value on the right side (when the right side is an array). If the right side is not an array, just implements a simple Equals (=). Returns true or false
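A hedged sketch (in Python) of the SQL LIKE semantics referenced above ('%' matches any run of characters, '_' matches exactly one); LiteDB implements LIKE internally in C#, so this mapping is illustrative only:

```python
import re

def sql_like_to_regex(pattern):
    # Translate LIKE wildcards into an anchored, case-insensitive regex;
    # all other characters are matched literally.
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)
```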
Returns value from the root document (used in parameter). Returns the same document if the name is empty
Return a value from a value as a document. If it has no name, just returns values ($). If the value is not a document, returns nothing
Returns a single value from an array according to an index or expression parameter
Returns all values from an array according to a filter expression, or all values (index = MaxValue)
Create a document based on key-value pairs in the parameters. DOCUMENT('_id', 1)
Return an array from a list of values. Supports multiple values but returns a single value
Compile and execute simple expressions using BsonDocuments. Used in indexes and updates operations. See https://github.com/mbdavid/LiteDB/wiki/Expressions
Operation definition by methods with defined expression type (operators are in precedence order)
Start parse string into linq expression. Read path, function or base type bson values (int, double, bool, string)
Start parse string into linq expression. Read path, function or base type bson values (int, double, bool, string)
Parse a document builder syntax used in SELECT statement: {expr0} [AS] [{alias}], {expr1} [AS] [{alias}], ...
Parse a document builder syntax used in UPDATE statement:
{key0} = {expr0}, .... will be converted into { key: [expr], ... }
{key: value} ... returns a new document
Try parse double number - return null if not double token
Try parse int number - return null if not int token
Try parse bool - return null if not bool token
Try parse null constant - return null if not null token
Try parse string with both single/double quote - return null if not string
Try parse json document - return null if not document token
Try parse source documents (when passed) * - return null if not source token
Try parse array - return null if not array token
Try parse parameter - return null if not parameter token
Try parse inner expression - return null if not bracket token
Try parse method call - return null if not method call
Parse JSON-Path - return null if not a path token
Implement a JSON-Path like navigation on BsonDocument. Support a simple range of paths
Try parse FUNCTION methods: MAP, FILTER, SORT, ...
Parse expression functions, like MAP, FILTER or SORT.
MAP(items[*] => @.Name)
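A sketch (in Python) of what the enumerable expression functions above do; '@' (the current element in a LiteDB expression) is modeled here as the argument of a plain callable:

```python
def map_expr(items, selector):
    # MAP(items[*] => @.Name): apply a sub-expression to each element of an
    # enumerable and yield each result.
    return (selector(item) for item in items)

def filter_expr(items, predicate):
    # FILTER: keep only elements for which the predicate is true.
    return (item for item in items if predicate(item))
```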
Create an array expression with 2 values (used only in BETWEEN statement)
Get field from simple \w regex or ['comp-lex'] - also, add into source. Can read empty field (root)
Read key in document definition with single word or "comp-lex"
Read next token as an operator with ANY|ALL keyword before - returns null if the next token is not an operator
Convert scalar expression into enumerable expression using ITEMS(...) method
Append [*] to path or ITEMS(..) in all others
Convert enumerable expression into array using ARRAY(...) method
Create new logic (AND/OR) expression based in 2 expressions
Create new conditional (IIF) expression. Execute expression only if True or False value
A class that read a json string using a tokenizer (without regex)
Static class for serialize/deserialize BsonDocuments into json extended format
Json serialize a BsonValue into a String
Json serialize a BsonValue into a TextWriter
Json serialize a BsonValue into a StringBuilder
Deserialize a Json string into a BsonValue
Deserialize a Json TextReader into a BsonValue
Deserialize a json array as an IEnumerable of BsonValue
Deserialize a json array as an IEnumerable of BsonValue, reading from the TextReader on demand
Get/Set indent size
Get/Set if writer must print pretty (with new line/indent)
Serialize value into text writer
Represent a 12-bytes BSON type used in document Id
A zero 12-bytes ObjectId
Get timestamp
Get machine number
Get pid number
Get increment
Get creation time
Initializes a new empty instance of the ObjectId class.
Initializes a new instance of the ObjectId class from ObjectId vars.
Initializes a new instance of ObjectId class from another ObjectId.
Initializes a new instance of the ObjectId class from hex string.
Initializes a new instance of the ObjectId class from byte array.
Convert hex value string into a byte array
Checks if this ObjectId is equal to the given object. Returns true
if the given object is equal to the value of this instance.
Returns false otherwise.
Determines whether the specified object is equal to this instance.
Returns a hash code for this instance.
Compares two instances of ObjectId
Represent ObjectId as 12 bytes array
Creates a new ObjectId.
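The timestamp/machine/pid/increment accessors above follow the standard 12-byte ObjectId layout. A language-neutral sketch (in Python) of splitting a 24-char hex string into those parts; field names mirror the accessors above:

```python
def objectid_parts(hex_string):
    # Layout: 4-byte big-endian timestamp (seconds since Unix epoch),
    # 3-byte machine, 2-byte pid, 3-byte increment.
    raw = bytes.fromhex(hex_string)
    if len(raw) != 12:
        raise ValueError("ObjectId must be 12 bytes")
    timestamp = int.from_bytes(raw[0:4], "big")
    machine = int.from_bytes(raw[4:7], "big")
    pid = int.from_bytes(raw[7:9], "big")
    increment = int.from_bytes(raw[9:12], "big")
    return timestamp, machine, pid, increment
```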
Memory file reader - must call Dispose after use to return the reader to the pool
This class is not ThreadSafe - must have 1 instance per thread (get instance from DiskService)
Read bytes from stream into buffer slice
Request an empty, writable, non-linked page (same as DiskService.NewPage)
When dispose, return stream to pool
Implement custom fast/in memory mapped disk access
[ThreadSafe]
Get memory cache instance
Create a new empty database (use synced mode)
Get a new instance for reading data/log pages. This instance is not thread-safe - must request 1 per thread (used in Transaction)
This method calculates the maximum number of items (documents or IndexNodes) that this database can have.
The result is used to prevent infinite loops in case of problems with pointers
Each page supports a max of 255 items. Use a 10-page offset (avoid empty disk)
When a page is requested as writable but not saved to disk, it must be discarded before release
Discard pages that contain valid data and were not modified
Request an empty, writable, non-linked page.
Write all pages inside log file in a thread safe operation
Get file length based on data/log length variables (not read directly from disk)
Mark a file with a single signal so the next open does an auto-rebuild. Used only when closing the database (after closing files)
Read all database pages inside the file without using the cache. PageBuffers don't need to be released
Write pages DIRECT to disk. These pages are not cached and not shared - WORKS FOR DATA FILE ONLY
Set new length for file in sync mode. Queue must be empty before set length
Get file name (or Stream name)
Manage linear memory segments to avoid re-creating array buffer in heap memory
Do not share same memory store with different files
[ThreadSafe]
Contains free ready-to-use pages in memory
- All pages here MUST have ShareCounter = 0
- All pages here MUST have Position = MaxValue
Contains only clean pages (from both data/log file) - support page concurrency use
- MUST have defined Origin and Position
- Contains only 1 instance per Position/Origin
- Contains only pages with ShareCounter >= 0
* = 0 - Page is available but is not in use by anyone (can be moved into _free list on next Extend())
* >= 1 - Page is in use by 1 or more threads. Page must run "Release" when finished using
Get how many extends were made in this store
Get memory segment sizes
Get page from clean cache (readable). If the page doesn't exist, create a new page and load data using a factory fn
Get unique position in dictionary according to origin. Use positive/negative values
Request a writable page - no other can read this page and this page has no reference
Writable pages can be MoveToReadable() or DiscardWritable() - but never Released()
Create new page using an empty buffer block. Mark this page as writable.
Create new page using an empty buffer block. Mark this page as writable.
Try to move this page to readable list (if not already in readable list)
Returns true if it was moved
Move a writable page to readable list - if already exists, override content
Used after write operation that must mark page as readable because page content was changed
This method runs BEFORE send to write disk queue - but new page request must read this new content
Returns readable page
Completely discard a writable page - clean content and move to free list
Get a clean, re-usable page from store. Can extend buffer segments if store is empty
Check if it's possible to move readable pages to the free list - if not possible, extend memory
Return how many pages are in use when this method is called (ShareCounter != 0).
Return how many pages are available (completely free)
Return how many segments are already loaded in memory
Get how many pages this cache extends in memory
Get how many pages are used as Writable at this moment
Get all readable pages
Clean all cache memory - moving back all readable pages into free list
This command must be called inside an exclusive lock
Read multiple array segment as a single linear segment - Forward Only
Current global cursor position
Indicates the position is at the end of the last source array segment
Move forward in current segment. If array segment finishes, open next segment
Returns true if moved to another segment - returns false if continues in the same segment
Read bytes from source and copy into buffer. Returns how many bytes were read
Skip bytes (same as Read but with no array copy)
Consume all data source until finish
Read string with fixed size
Read string until \0 is found at the end
Try to read CString in the current segment, avoiding byte-by-byte reads across segments
Read DateTime as UTC ticks (not BSON format)
Read Guid as 16 bytes array
Write ObjectId as 12 bytes array
Write a boolean as 1 byte (0 or 1)
Write single byte
Write PageAddress as PageID, Index
Read byte array - not great because it needs to create a new array instance
Read single IndexKey (BsonValue) from buffer. Use +1 length only for string/binary
Read a BsonDocument from reader
Read a BsonArray from reader
Read an element (key-value) from a reader
Write data types/BSON data into byte[]. It's forward only and supports multiple buffer slices as source
Current global cursor position
Indicates the position is at the end of the last source array segment
Move forward in current segment. If the array segment finishes, open the next segment
Returns true if moved to another segment - returns false if it continues in the same segment
Write bytes from buffer into segments. Returns how many bytes were written
Write bytes from buffer into segments. Returns how many bytes were written
Skip bytes (same as Write but with no array copy)
Consume all data source until finish
Write String with \0 at end
Write string into output buffer.
Support direct string (with no length information) or BSON spec: (length + 1) [4 bytes] before and '\0' at end = 5 extra bytes
Write DateTime as UTC ticks (not BSON format)
Write Guid as 16 bytes array
Write ObjectId as 12 bytes array
Write a boolean as 1 byte (0 or 1)
Write single byte
Write PageAddress as PageID, Index
Write BsonArray as BSON specs. Returns array bytes count
Write BsonDocument as BSON specs. Returns document bytes count
FileStream disk implementation of disk factory
[ThreadSafe]
Get data filename
Create new data file FileStream instance based on filename
Get file length using FileInfo. Crop file length if it is not a multiple of PAGE_SIZE
Check if file exists (without opening it)
Delete file (all streams must be closed)
Test if this file is locked by another process
Close all streams on end
Interface factory to provide new Stream instances for datafile/walfile resources. Useful so multiple threads can read the same datafile
Get Stream name (filename)
Get new Stream instance
Get file length
Checks if file exists
Delete physical file on disk
Test if this file is used by another process
Indicate that factory must be disposed on finish
Simple Stream disk implementation of disk factory - used for Memory/Temp database
[ThreadSafe]
Stream has no name (use stream type)
Use ConcurrentStream wrapper to support multi thread in same Stream (using lock control)
Get file length using _stream.Length
Check if file exists based on stream length
There is no delete method in Stream factory
Test if this file is locked by another process (there is no way to test with Stream only)
Do not dispose on finish
Manage multiple open readonly Stream instances from same source (file).
Support single writer instance
Close all Stream on dispose
[ThreadSafe]
Get single Stream writer instance
Rent a Stream reader instance
After use, return Stream reader instance
Close all Stream instances (readers/writer)
Encrypted AES Stream
Decrypt data from Stream
Encrypt data to Stream
Get new salt for encryption
Implement internal thread-safe Stream using lock control - a single ConcurrentStream instance is not multi-thread safe,
but multiple ConcurrentStream instances using the same base stream will support concurrency
Implement a temporary stream that uses a MemoryStream until reaching LIMIT bytes, then copies everything to a temporary disk file that is deleted on dispose
Can be pass
Indicate that stream are all in memory
Indicate that stream is now on this
Get temp disk filename (if null will be generate only when create file)
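As an illustrative aside, Python's `tempfile.SpooledTemporaryFile` implements the same memory-until-limit pattern as this temporary stream; the `_rolled` flag checked below is a CPython implementation detail used here only to observe the rollover:

```python
import tempfile

# Keep data in memory until max_size bytes, then roll over to a temp
# disk file that is deleted on close - same idea as LiteDB's TempStream.
with tempfile.SpooledTemporaryFile(max_size=16) as ts:
    ts.write(b"small")           # 5 bytes: still in memory
    in_memory = not ts._rolled
    ts.write(b"x" * 32)          # exceeds the 16-byte limit: moved to disk
    on_disk = ts._rolled

assert in_memory and on_disk
```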
Internal database pragmas persisted inside header page
Internal user version control to detect database changes
Define collation for this database. Value will be persisted on disk at first database write. After that, the collation cannot be changed
Timeout for waiting unlock operations (default: 1 minute)
Max limit of datafile (in bytes) (default: MaxValue)
Returns date in UTC timezone from BSON deserialization (default: false == LocalTime)
When LOG file gets larger than checkpoint size (in pages), do a soft checkpoint (and also do a checkpoint at shutdown)
Checkpoint = 0 means there's no auto-checkpoint nor shutdown checkpoint
Get all pragmas
All engine settings used to start a new engine
Get/Set custom stream to be used as datafile (can be MemoryStream or TempStream). Do not use FileStream - to use physical file, use "filename" attribute (and keep DataStream/WalStream null)
Get/Set custom stream to be used as log file. If is null, use a new TempStream (for TempStream datafile) or MemoryStream (for MemoryStream datafile)
Get/Set custom stream to be used as temp file. If is null, will create new FileStreamFactory with "-tmp" on name
Full path or relative path from DLL directory. Can use ':temp:' for temp database or ':memory:' for in-memory database. (default: null)
Get database password to decrypt pages
If database is new, initialize with allocated space (in bytes) (default: 0)
Create database with a custom collation (used only when creating the database) (default: Collation.Default)
Indicate that engine will open files in readonly mode (and will not support any database change)
After a Close with exception do a database rebuild on next open
If an older version (v4) is detected, upgrade the datafile to the new v5. A backup file will be kept in the same directory
Used to transform a document from the database on read. This can be used to upgrade data from older versions.
Create new IStreamFactory for datafile
Create new IStreamFactory for logfile
Create new IStreamFactory for temporary file (sort)
A public class that takes care of all engine data structure access - it's a basic implementation of a NoSQL database
It's isolated from the complete solution - works on low level only (no LINQ, no POCO... just BSON objects)
[ThreadSafe]
Returns all collections inside the datafile
Drop collection including all documents, indexes and extended pages (do not support transactions)
Rename a collection (do not support transactions)
Implements delete based on IDs enumerable
Implements delete based on filter expression
Create a new index (or do nothing if already exists) to a collection/field
Drop an index from a collection
Insert all documents in collection. If document has no _id, use AutoId generation.
Internal implementation of insert a document
Get engine internal pragma value
Set engine pragma new value (some pragmas will be affected only after reload)
Run query over collection using a query definition.
Returns a new IBsonDataReader that run and return first document result (open transaction)
Implement a full rebuild database. Engine will be closed and re-created in another instance.
A backup copy will be created with -backup extension. All data will be read and re-created in another database
After run, will re-open database
Implement a full rebuild database. A backup copy will be created with -backup extension. All data will be read and re-created in another database
Fill current database with data inside file reader - run inside a transaction
Recovery datafile using a rebuild process. Run only on "Open" database
Get the latest _id value from a collection plus 1 - uses _sequence cache
Update sequence number with new _id passed by user, IF this number is higher than the current last _id
At this point, newId.Type is Number
Get last _id index key from collection. Returns MinValue if collection is empty
Get registered system collection
Register a new system collection that can be used in query for input/output data
Collection name must start with $
Register a new system collection that can be used in query for input data
Collection name must start with $
Initialize a new transaction. Transactions are created per-thread; there is only a single transaction per thread.
Return true if a transaction was created, or false if the current thread is already in a transaction.
Persist all dirty pages into LOG file
Do rollback to current transaction. Clear dirty pages in memory and return new pages to main empty linked-list
Create (or reuse) a transaction and add a try/catch block. Commit the transaction if it is a new transaction
Implement update command to a document inside a collection. Return number of documents updated
Update documents using transform expression (must return a scalar/document value) using predicate as filter
Implement internal update document
If Upgrade=true, run this before open Disk service
Upgrade old version of LiteDB into the new LiteDB file structure. Returns true if database was completely converted
If the database is already in the current version, just returns false
Implement upsert command to documents in a collection. Calls update on all documents,
then any documents not updated are attempted to be inserted.
This has the side effect of throwing if duplicate items are inserted.
All system read-only collections for get metadata database information
Sequence cache for collections last ID (for int/long numbers only)
Initialize LiteEngine using an in-memory database
Initialize LiteEngine using a connection string with key=value; parsing
Initialize LiteEngine using initial engine settings
Normal close process:
- Stop any new transaction
- Stop operation loops over database (throw in SafePoint)
- Wait for writer queue
- Close disks
- Clean variables
Exception close database:
- Stop diskQueue
- Stop any disk read/write (dispose)
- Dispose sort disk
- Dispose locker
- Checks Exception type for INVALID_DATAFILE_STATE to auto rebuild on open
Run checkpoint command to copy log file into data file
Register all internal system collections available by default
Internal class to read old LiteDB v4 database version (datafile v7 structure)
Check header slots to test if data file is a LiteDB FILE_VERSION = v7
Read all collection based on header page
Read all indexes from all collection pages
Get all documents using an indexInfo as start point (_id index).
Read all database pages from v7 structure into a flexible BsonDocument - only read what really needs
Read extend data block
Visit all index pages by starting index page. Get a list with all index pages from a collection
Internal class to read all datafile documents - uses only Stream - no cache system. Reads log file (read committed transactions)
Open data file and log file, read header and collection pages
Read all pragma values
Read all collection based on header page
Read all indexes from all collection pages (except _id index)
Read all documents from current collection with NO index use - read directly from free lists
There is no document order
Load all pragmas from header page
Read all file (and log) to find all data pages (and store groupby colPageID)
Load all collections from header OR via all data-pages ColID
Load all indexes for all collections
Check header slots to test if data file is a LiteDB FILE_VERSION = v8
Load log file to build index map (wal map index)
Read page from data/log stream (checks in logIndexMap file/position). Capture any exception here, but don't call HandleError
Handle any error avoiding throw exceptions during process. If exception must stop process (ioexceptions), throw exception
Add errors to log and continue reading data file
Interface to read current or old datafile structure - used to shrink/upgrade datafile from old LiteDB versions
Open and initialize file reader (run before any other command)
Get all database pragma variables
Get all collections name from database
Get all indexes from collection (except _id index)
Get all documents from a collection
Bytes used in each offset slot (to store segment position (2) + length (2))
Represent page number - start in 0 with HeaderPage [4 bytes]
Indicate the page type [1 byte]
Represent the previous page. Used for page-sequences - MaxValue means there is NO previous page [4 bytes]
Represent the next page. Used for page-sequences - MaxValue means there is NO next page [4 bytes]
Get/Set which free list slot this page is in [1 byte]
Used only in DataPage (0-4) and IndexPage (0-1) - when new or not used: 255
DataPage: 0 (7344 - 8160 free space) - 1 (6120 - 7343) - 2 (4896 - 6119) - 3 (2448 - 4895) - 4 (0 - 2447)
IndexPage 0 (1400 - 8160 free bytes) - 1 (0 - 1399 bytes free)
Indicate how many items are used inside this page [1 byte]
Get how many bytes are used on content area (exclude header and footer blocks) [2 bytes]
Get how many bytes are fragmented inside this page (free blocks inside used blocks) [2 bytes]
Get next free position. Starts with 32 (first byte after header) - There is no fragmentation after this [2 bytes]
Get last (highest) used index slot - use byte.MaxValue for empty [1 byte]
Get how many free bytes (including fragmented bytes) are in this page (content space) - will return 0 bytes if the page is full (or has the max of 255 items)
Get how many bytes are used in footer page at this moment
((HighestIndex + 1) * 4 bytes per slot: [2 for position, 2 for length])
All datafile pages store the page ID of their data/index collection. Useful to re-build the database without any index [4 bytes]
Represent transaction ID that was stored [4 bytes]
Used in WAL, defines that this page is the last transaction page and is confirmed on disk [1 byte]
Set when this page was changed and must be persisted to disk [not persistable]
Get page buffer instance
Create new Page based on pre-defined PageID and PageType
Read header data from byte[] buffer into local variables
using fixed positions to be faster than using BufferReader
Write header data from variable into byte[] buffer. When override, call base.UpdateBuffer() after write your code
Change current page to Empty page - fix variables and buffer (DO NOT change PageID)
Get a page segment item based on index slot
Get a new page segment for this length content
Get a new page segment for this length content using fixed index
Remove index slot about this page segment
Update segment bytes with new data. Current page must have enough bytes for this new size. Index will not be changed
Update will try to use the same segment to store. If not possible, writes at the end of the page (with possible Defrag operation)
Defrag method re-organize all byte data content removing all fragmented data. This will move all page segments
to create a single continuous content area (just after header area). No index segment will be changed (only positions)
Store start index used in GetFreeIndex to avoid always running a full loop over all indexes
Get a free index slot in this page
Get all used slots indexes in this page
Update HighestIndex based on current HighestIndex (step back looking for next used slot)
Used only in Delete() operation
Checks if segment position has a valid value (used for DEBUG)
Checks if segment length has a valid value (used for DEBUG)
Get buffer offset position where one page segment position is located (based on index slot)
Get buffer offset position where one page segment length is located (based on index slot)
Returns the size of a specified number of pages
Returns the size of a specified number of pages
Create new page instance based on buffer (READ)
Create new page instance with new PageID and passed buffer (NEW)
Free data page linked-list (N lists for different range of FreeBlocks)
All indexes references for this collection
Get PK index
Get index from index name (index name is case sensitive) - returns null if not found
Get all indexes in this collection page
Get all collections array based on slot number
Insert new index inside this collection page
Return index instance and mark as updatable
Remove index reference in this page
The DataPage that stores object data.
Read existing DataPage in buffer
Create new DataPage
Get single DataBlock
Insert new DataBlock. Use extend to indicate document sequence (document is larger than PAGE_SIZE)
Update current block, returning the data block to be filled
Delete single data block inside this page
Get all block positions inside this page that are not extend blocks (initial data block)
FreeBytes ranges on page slot for free list page
90% - 100% = 0 (7344 - 8160)
75% - 90% = 1 (6120 - 7343)
60% - 75% = 2 (4896 - 6119)
30% - 60% = 3 (2448 - 4895)
0% - 30% = 4 (0000 - 2447)
Returns the slot the page should be in, given the free bytes it has
A slot number between 0 and 4
Returns the slot where there is a page with enough space for bytes of data.
Returns -1 if no space guaranteed (more than 90% of a DataPage net size)
A slot number between -1 and 3
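The two slot lookups above can be sketched in Python using the thresholds from the ranges listed (function names are illustrative, not LiteDB's C# API):

```python
# Lower bounds of each DataPage free-space range (8160 usable content bytes)
_FREE_PAGE_SLOTS = [7344, 6120, 4896, 2448, 0]   # slots 0..4

def get_free_index_slot(free_bytes: int) -> int:
    """Return the free-list slot (0-4) a page belongs to, given its free bytes."""
    for slot, lower in enumerate(_FREE_PAGE_SLOTS):
        if free_bytes >= lower:
            return slot

def get_minimum_index_slot(length: int) -> int:
    """Return the slot guaranteed to hold `length` bytes, or -1 when no slot
    guarantees the space (more than ~90% of a DataPage net size)."""
    return get_free_index_slot(length) - 1

# A page in slot N has at least _FREE_PAGE_SLOTS[N] free bytes, so data of
# `length` bytes is guaranteed to fit in any page one slot tighter than its own.
assert get_minimum_index_slot(100) == 3
```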
Header page represent first page on datafile. Engine contains a single instance of HeaderPage and all changes
must be synchronized (using lock).
Header info used to validate that the datafile is a LiteDB file (27 bytes)
Datafile specification version
Get/Set the pageID that starts the sequence of completely empty pages (can be used as new pages) [4 bytes]
Last created page - Used when there is no free page inside file [4 bytes]
DateTime when database was created [8 bytes]
Get database pragmas instance class
All collections names/link pointers are stored inside this document
Check if collections were changed
Create new Header Page
Load HeaderPage from buffer page
Load page content based on page buffer
Create a save point before doing any change on header page (execute UpdateBuffer())
Restore savepoint content and override on page. Must run in lock(_header)
Get collection PageID - return uint.MaxValue if not exists
Get all collections with pageID
Insert new collection in header
Remove existing collection reference in header
Rename collection with new name
Get how many bytes are available in the header to store new collections
The IndexPage that stores index nodes.
Read existing IndexPage in buffer
Create new IndexPage
Read single IndexNode
Insert new IndexNode. After call this, "node" instance can't be changed
Delete index node based on page index
Get all index nodes inside this page
Get page index slot on FreeIndexPageID
8160 - 600 : Slot #0
599 - 0 : Slot #1 (no page in list)
Class that implement higher level of index search operations (equals, greater, less, ...)
Index name
Get/Set index order
Calculate cost based on type/value/collection - Lower is best (1)
Abstract method that must be implement for index seek/scan - Returns IndexNodes that match with index
Find which index will be used and run the Execute method
Return all index nodes
Implement equals index operation =
Implement IN index operation. Value must be an array
Implement range operation - in asc or desc way - can be used as LT, LTE, GT, GTE too because support MinValue/MaxValue
Execute an "index scan" passing a Func as where
Implement virtual index for system collections AND full data collection read
Implement basic document loader based on data service/bson reader
Interface for abstract document lookup that can be direct from datafile or by virtual collections
Implement lookup based only in index Key
Abstract class with workflow method to be used in pipeline implementation
Abstract method to be implemented according to the pipe workflow
INCLUDE: Do include in result document according to path expression - works only with DocumentLookup
WHERE: Filter document according to expression. Expression must return a Bool result
ORDER BY: Sort documents according to orderby expression and asc/desc order
Implement an IEnumerable document cache that read data first time and store in memory/disk cache
Used in GroupBy operation and MUST read all of the IEnumerable source before dispose because it needs to be linear with the main resultset
Implement query using GroupBy expression
GroupBy Pipe Order
- LoadDocument
- Filter
- OrderBy (to GroupBy)
- GroupBy
- HavingSelectGroupBy
- OffSet
- Limit
GROUP BY: Apply groupBy expression and aggregate results in DocumentGroup
YieldDocuments will run over all key-ordered source and returns groups of source
Run Select expression over a group source - each group will return a single value
If a Having expression is present, test if result = true before running Select
Basic query pipe workflow - support filter, includes and orderby
Query Pipe order
- LoadDocument
- IncludeBefore
- Filter
- OrderBy
- OffSet
- Limit
- IncludeAfter
- Select
Pipe: Transform final result applying expression transform. Can return documents or simple values
Pipe: Run select expression over all recordset
Class that executes QueryPlan, returning results
Run query definition into engine. Execute optimization to get query planner
Execute query and insert result into another collection. Support external collections
Class that optimize query transforming user "Query" into "QueryPlan"
Build QueryPlan instance based on QueryBuilder fields
- Load used fields in all expressions
- Select best index option
- Fill includes
- Define orderBy
- Define groupBy
Fill terms from where predicate list
Do some pre-defined optimization on terms to convert expensive filters into indexable filters
Load all fields that must be deserialized from document.
Try select index based on lowest cost or GroupBy/OrderBy reuse - use this priority order:
- Get lowest index cost used in WHERE expressions (will filter data)
- If there is no candidate, try get:
- Same of GroupBy
- Same of OrderBy
- Preferred single-field (when no lookup needed)
Define OrderBy optimization (try re-use index)
Define GroupBy optimization (try re-use index)
Will define each include to be run BEFORE where (worst) OR AFTER where (best)
Represent a GroupBy definition (based on OrderByDefinition)
Calculate index cost based on expression/collection index.
Lower cost is better - lowest will be selected
Get filtered expression: "$._id = 10"
Get index expression only: "$._id"
Get created Index instance used on query
Create index based on expression predicate
Represent an OrderBy definition
This class is the result of optimization of QueryBuild in QueryAnalyzer. It indicates how the engine must run the query - there are no more decisions for the engine to make; it must only execute the query as defined
Contains the used index and estimated cost to run
Get collection name (required)
Index used on query (required)
Index expression that will be used in index (source only)
Get index cost (lower is best)
If true, generate document result only with IndexNode.Key (avoids loading the whole document)
List of filters of documents
List of includes that must be done BEFORE the filter (not optimized, but some filters will use this include)
List of includes that must be done AFTER the filter (optimized because it includes only the result)
Expression to order by resultset
Expression to group by document results
Transformation applied to data before return - if null there is no transform (return document)
Get field names that will be deserialized from disk
Limit resultset
Skip documents before returns
Indicate this query is for update (lock mode = Write)
Select correct pipe
Get correct IDocumentLookup
Get detail about execution plan for this query definition
Represent a Select expression
Check if collection name is valid (and fits in the header)
Throw correct error message if the name is not valid or does not fit in the header page
Get collection page instance (or create a new one). Returns true if a new collection was created
Add a new collection. Check that the name does not already exist. Create only in transaction page - will update header only on commit
Get maximum data bytes[] that fit in 1 page = 8150
Insert BsonDocument into new data pages
Update document using same page position as reference
Get all buffer slices that address block contains. Need use BufferReader to read document
Delete all data blocks that contain a document (can span multiple data blocks)
Implement a Index service - Add/Remove index nodes on SkipList
Based on: http://igoro.com/archive/skip-lists-are-fascinating/
Create a new index and returns head page address (skip list)
Insert a new node index inside a collection index. Flip coin to determine level
Insert a new node index inside a collection index.
Flip coin (skip list): returns how many levels the node will have (starts at 1, max of INDEX_MAX_LEVELS)
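The coin flip can be sketched as follows (illustrative Python, not LiteDB's C# implementation; the 1/2 promotion probability is the classic skip-list choice):

```python
import random

INDEX_MAX_LEVELS = 32  # skip-list level cap

def flip_coin(rng=random) -> int:
    """Choose a node's level: start at 1 and keep promoting with
    probability 1/2, capped at INDEX_MAX_LEVELS."""
    level = 1
    while rng.random() < 0.5 and level < INDEX_MAX_LEVELS:
        level += 1
    return level
```

With this rule, about half of all nodes get level 1, a quarter get level 2, and so on, which is what gives the skip list its O(log n) expected search cost.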
Get a node inside a page using PageAddress - Returns null if address IsEmpty
Gets all node list from passed nodeAddress (forward only)
Deletes all indexes nodes from pkNode
Deletes the list of nodes in toDelete - fixes the single linked-list and returns the last non-deleted node
Delete a single index node - fix tree double-linked list levels
Delete all index nodes from a specific collection index. Scan over all PK nodes, read all nodes list and remove
Return all index nodes from an index
Find the first node in the index that matches the value.
If the index is unique, returns the unique value - if the index is not unique, returns the first found (can be at the start, middle or end)
If the key is not found but sibling = true, returns the next value index node (if order = Asc) or the previous node (if order = Desc)
The lock service provides collection-based locks. Locks allow any number of threads to read at the same time. Writing operations are locked
per collection. Eventually, a write operation can change the header page, which has its own exclusive lock.
[ThreadSafe]
Return true if current thread has an open transaction
Return how many transactions are opened
Enter transaction read lock - should be called just before enter a new transaction
Exit transaction read lock
Enter collection write lock mode (only 1 collection per time can have this lock)
Exit collection in reserved lock
Put the whole database in exclusive lock. Waits for all transactions to finish. In exclusive mode no one can enter a new transaction (for read/write)
If current thread already in exclusive mode, returns false
Try enter in exclusive mode - if not possible, just exit with false (do not wait and no exceptions)
If mustExit returns true, must call ExitExclusive after use
Exit exclusive lock
[ThreadSafe]
Read first 16kb (2 PAGES) in bytes
Represent a single snapshot
Get all snapshot pages (may or may not include the collectionPage) - if included, it will be the last page
Clear all local pages and return page buffer to file reader. Do not release CollectionPage (only in Dispose method)
Dispose stream readers and exit collection lock
Get a valid page for this snapshot (must consider local-index and wal-index)
Get a valid page for this snapshot (must consider local-index and wal-index)
Read page from disk (dirty, wal or data)
Returns a page that contains enough space to insert a new object - if one does not exist, creates a new page.
Before returning the page, fix the empty free list slot according to the passed length
Get an index page with enough space for a new index node
Get a new empty page from disk: can be a reused page (from header free list) or file extend
Never re-use page from same transaction
Add/Remove a data page from free list slots
Add/Remove a index page from single free list
Add page into double linked-list (always add as first element)
Remove a page from double linked list.
Delete a page - this page will be marked as Empty page
There is no re-use of deleted pages in the same transaction - deleted pages go into another linked list and will
become part of the header free list page only on commit
Delete current collection and all pages - this snapshot can't be used after this
This class monitor all open transactions to manage memory usage for each transaction
[Singleton - ThreadSafe]
Release current thread transaction
Get transaction from current thread (from thread slot or from queryOnly) - does not create a new transaction
Used only in SystemCollections to get running query transaction
Get initial transaction size - get from free pages or reducing from all open transactions
Try extend max transaction size in passed transaction ONLY if contains free pages available
Check if transaction size reach limit AND check if is possible extend this limit
Dispose all open transactions
Represent a single transaction service. Need a new instance for each transaction.
You must run each transaction in a different thread - no two transactions in the same thread (locks are per-thread)
Get/Set how many open cursors this transaction is running
Get/Set if this transaction was opened by BeginTrans() method (not by AutoTransaction/Cursor)
Finalizer: Will be called once a thread is closed. The TransactionMonitor._slot releases the used TransactionService.
Create (or get from transaction-cache) snapshot and return
If the current transaction contains too many pages, now is a safe time to remove clean pages from memory and flush dirty pages to the WAL disk
Persist all dirty in-memory pages (in all snapshots) and clear local pages list (even clean pages)
Write pages into disk and confirm transaction in wal-index. Returns true if any dirty page was updated
After commit, all snapshot are closed
Rollback transaction operation - ignore all modified pages and return new pages into disk
After rollback, all snapshot are closed
Return added pages when a transaction rollback occurs (run this only in rollback). Create a new transactionID and add into the
log file all new pages as EmptyPage in a linked order - also, update SharedPage before storing
Public implementation of Dispose pattern.
Do all WAL index services based on LOG file - has only single instance per engine
[Singleton - ThreadSafe]
Store last used transaction ID
Get current read version for all new transactions
Get current counter for transaction ID
Clear WAL index links and cache memory. Used after checkpoint and rebuild rollback
Get new transactionID in thread safe way
Checks if a Page/Version is in WAL-index memory. Considers only versions below the parameter. Returns the PagePosition of this page inside the WAL file or Empty if the page is not found.
Add transactionID in confirmed list and update WAL index with all pages positions
Load all confirmed transactions from log file (used only when open datafile)
Don't need lock because it's called on ctor of LiteEngine
Do checkpoint operation to copy log pages into data file. Returns how many transactions were committed inside the data file
Checkpoint requires exclusive lock database
Run checkpoint only if there is no open transactions
Do checkpoint operation to copy log pages into data file. Returns how many transactions were committed inside the data file
Checkpoint requires exclusive lock database
If soft = true, just try enter in exclusive mode - if not possible, just exit (don't execute checkpoint)
Returns if current container has no more items to read
Get current/last read value in container
Get container disk position
Get how many keyValues this container contains
Initialize reader based on Stream (if data was persisted in disk) or Buffer (if all data fit in only 1 container)
Get 8k buffer slices inside file container
Single instance of TempDisk manage read/write access to temporary disk - used in merge sort
[ThreadSafe]
Get a new reader stream from pool. Must return after use
Return used open reader stream to be reused in next sort
Return used disk container position to be reused in next sort
Get next available disk position - can be a new extend of the file or a reused container slot
Use thread safe classes to ensure multiple threads access at same time
Write buffer container data into disk
Service to implement merge sort, in disk, to run ORDER BY command.
[ThreadSafe]
Get how many documents were inserted by the Insert method
Expose used container in this sort operation
Read all input items and store in temp disk ordered in each container
Split all items into big sorted containers - do merge sort across all containers
Split values into many IEnumerables. Each enumerable contains values to be inserted in a single container
Loop over the values enumerator to return N values for a single container
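The split-then-merge flow above can be sketched in Python (in-memory for brevity; LiteDB spills each sorted container to the temp disk before the k-way merge):

```python
import heapq

def external_sort(values, container_size=4):
    """Split input into containers, sort each one independently, then
    do a k-way merge across all sorted containers (heapq.merge keeps
    only one item per container in memory at a time)."""
    containers = [
        sorted(values[i:i + container_size])
        for i in range(0, len(values), container_size)
    ]
    return list(heapq.merge(*containers))

assert external_sort([5, 3, 8, 1, 9, 2, 7]) == [1, 2, 3, 5, 7, 8, 9]
```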
Slot index [0-255] used in all index nodes
Indicate index type: 0 = SkipList (reserved for future use)
Index name
Get index expression (path or expr)
Get BsonExpression from Expression
Indicate if this index has distinct values only
Head page address for this index
A link pointer to tail node
Reserved byte (old max level)
Free index page linked-list (all pages here must have at least 600 bytes)
Returns if this index slot is empty and can be used as new index
Get index collection size used in CollectionPage
Get index collection size used in CollectionPage
Represent a single query fetching data from engine
Get fixed part of DataBlock (6 bytes)
Position block inside page
Indicate if this data block is first block (false) or extend block (true)
If document need more than 1 block, use this link to next block
Document buffer slice
Read new DataBlock from filled page segment
Create new DataBlock and fill into buffer
Simple parameter class to be passed into IEnumerable classes loop ("ref" does not work)
There is no origin (new page)
Data file
Log file (-log)
Represent a index node inside a Index Page
Fixed length of IndexNode (12 bytes)
Position of this node inside a IndexPage (not persist)
Index slot reference in CollectionIndex [1 byte]
Skip-list levels (array-size) (1-32) - [1 byte]
The object value that was indexed (max 255 bytes value)
Reference for a datablock address
Single linked-list for all nodes from a single document [5 bytes]
Link to prev value (used in skip lists - Prev.Length = Next.Length) [5 bytes]
Link to next value (used in skip lists - Prev.Length = Next.Length)
Get index page reference
Calculate how many bytes this node will need on page segment
Get how many bytes will be used to store this value. Must consider:
[1 byte] - BsonType
[1 byte] - KeyLength (used only in String|Byte[])
[N bytes] - BsonValue in bytes (0-254)
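The size breakdown above, as a small illustrative calculation (the function name is hypothetical, not LiteDB's API):

```python
def index_key_size(value_bytes: int, variable_length: bool) -> int:
    """Bytes needed to store an index key on the page segment:
    [1 byte]  BsonType tag
    [1 byte]  KeyLength, present only for variable-length types (String, Byte[])
    [N bytes] the value itself (0-254)."""
    assert 0 <= value_bytes <= 254  # index key values are capped at 254 bytes
    return 1 + (1 if variable_length else 0) + value_bytes

# An Int32 key: 1 (type) + 4 (value) = 5 bytes
assert index_key_size(4, variable_length=False) == 5
# A 3-char ASCII string key: 1 (type) + 1 (length) + 3 (value) = 5 bytes
assert index_key_size(3, variable_length=True) == 5
```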
Read index node from page segment (lazy-load)
Create new index node and persist into page segment
Create a fake index node used only in Virtual Index runner
Update NextNode pointer (update in buffer too). Also, set page as dirty
Update Prev[index] pointer (update in buffer too). Also, set page as dirty
Update Next[index] pointer (update in buffer too). Also, set page as dirty
Returns Next (order == 1) OR Prev (order == -1)
Represents a snapshot lock mode
Read only snap with read lock
Read/Write snapshot with reserved lock
Represents a page address inside a page structure - index could be byte offset position OR index in a list (6 bytes)
PageID (4 bytes)
Page Segment index inside page (1 bytes)
Returns true if this PageAdress is empty value
Represent page buffer to be read/write using FileMemory
Get, on initialize, a unique ID across the whole database instance for this PageBuffer. It is a simple globally incremented counter
Get/Set page position. If the page is writable, this position CAN be MaxValue (position not defined yet)
Get/Set page bytes origin (data/log)
Get/Set how many read-share threads are using this page. -1 means one thread is using it as writable
Get/Set timestamp from last request
Release this page - decrement ShareCounter
Represents a page position after saving to disk. Used in WAL files where PageID does not match PagePosition
PageID (4 bytes)
Position in disk
Checks if current PagePosition is empty value
Represent a single internal engine variable that user can read/change
A random BuildID identifier
Rebuild database with a new password
Define a new collation when rebuild
When set true, if any problem occurs in rebuild, a _rebuild_errors collection
will contain all errors found
After running the rebuild process, get an error report (empty if no error detected)
Get a list of errors during rebuild process
Represent a simple structure to store added/removed pages in a transaction. One instance per transaction
[SingleThread]
Get how many pages are involved in this transaction across all snapshots - will be cleared when it reaches MAX_TRANSACTION_SIZE
Contains all dirty pages already persisted in the LOG file (used in all snapshots). Stored as [uint, PagePosition] to reuse the same method to save pages into the log and get saved page positions in the log
Handle created pages during transaction (for rollback) - Is a list because order is important
First deleted pageID
Last deleted pageID
Get deleted page count
Callback function to modify header page on commit
Run Commit event
Detect if this transaction will need to persist the header page (has added/deleted pages or added/deleted collections)
This class implements the experimental $query system function to run sub-queries. It's experimental only - it may not be present in the final release
Implements a simple system collection with input data only (to use Output, a class must inherit this one)
Get the system collection name (must start with $)
Get input data source factory
Get the output data source factory (must be implemented in the inheriting class)
Static helper to read options arg as plain value or as document fields
Static helper to read options arg as plain value or as document fields
Encryption AES wrapper to encrypt data pages
Encrypt byte array returning new encrypted byte array with same length of original array (PAGE_SIZE)
Decrypt a byte array, returning a new byte array
Hash a password using SHA1 just to verify password
Generate a salt key that will be stored inside first page database
Internal class to deserialize a byte[] into a BsonDocument using BSON data format
Main method - deserialize using ByteReader helper
Read a BsonDocument from reader
Read a BsonArray from reader
Reads an element (key-value) from a reader
Read a BSON string: a \x00 is appended at the end of the string, and that char is counted in the length prefix written before it
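This length convention matches the BSON spec: a string is stored as a little-endian int32 length (which counts the trailing \x00), followed by the UTF-8 bytes and the \x00 itself. An illustrative Python sketch (LiteDB itself is C#):

```python
import struct

def write_bson_string(value: str) -> bytes:
    raw = value.encode("utf-8") + b"\x00"          # trailing \x00 is part of the payload
    return struct.pack("<i", len(raw)) + raw       # length prefix counts that \x00 too

def read_bson_string(buf: bytes, pos: int) -> tuple:
    length = struct.unpack_from("<i", buf, pos)[0]
    raw = buf[pos + 4 : pos + 4 + length - 1]      # drop the trailing \x00
    return raw.decode("utf-8"), pos + 4 + length   # value and next read position
```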
Async implementation of ManualResetEvent
https://devblogs.microsoft.com/pfxteam/building-async-coordination-primitives-part-1-asyncmanualresetevent/
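The linked pattern (a TaskCompletionSource that waiters await until someone signals it) has a direct analogue in Python's asyncio.Event, shown here as an illustrative sketch: setting the event releases every waiter without blocking any thread.

```python
import asyncio

async def demo() -> list:
    gate = asyncio.Event()                     # manual-reset: stays signaled until clear()
    results = []

    async def waiter(n: int) -> None:
        await gate.wait()                      # suspends the coroutine, not a thread
        results.append(n)

    tasks = [asyncio.create_task(waiter(i)) for i in range(3)]
    await asyncio.sleep(0)                     # let the waiters reach gate.wait()
    gate.set()                                 # releases all current (and future) waiters
    await asyncio.gather(*tasks)
    return sorted(results)
```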
Internal class that implements the same idea as ArraySegment&lt;byte&gt;, but as a class (not a struct). Works for byte[] only
Clear all page content byte array (not controls)
Clear page content byte array
Fill all content with a value. Used for DEBUG purposes
Checks if the content contains only the value parameter (used for DEBUG)
Return this byte[] slice as a string of hex digits
Slice this buffer into a new BufferSlice according to a new offset and count
Convert this buffer slice into a new byte[]
Implements how the database compares strings for order-by/find, according to the defined culture/compare options
If not set, default is CurrentCulture with IgnoreCase
Get LCID code from culture
Get database language culture
Get options for how strings should be compared in sorts
Compare 2 string values using current culture/compare options
Class with all constants used in LiteDB + Debugger HELPER
The size of each page on disk - uses 8192, as most major databases do
Header page size
Bytes used in encryption salt
Define ShareCounter buffer as writable
Define index name max length
Max level used on skip list (index).
Max size of an index entry - used for string, binary, array and documents. Must fit in a 1-byte length
Get max length of 1 single index node
Get how many slots collection pages will have for free list page (data/index)
Document limit size - 2048 data pages limit (about 16Mb - same size as MongoDB)
Using 2047 because first/last page can contain less than 8150 bytes.
Define how many transactions can be open simultaneously
Define how many pages all transactions will consume, in memory, before persisting to disk. This amount is shared across all open transactions
100,000 ~= 1Gb memory
Size, in PAGES, for each buffer array (used in MemoryStore)
It's an array so the size increases after each extend - limited to the highest value
Each byte array will be created with this size * PAGE_SIZE
Use a minimum of 12 to allocate at least 85KB per segment (so allocations go to the LOH)
Define how many documents will be kept in memory until the cache is cleared and support for orderby/groupby is removed
Define how many bytes each merge-sort container will be created with
Initial seed for Random
Log a message using Debug.WriteLine
Log a message using Debug.WriteLine only if conditional = true
Ensure condition is true, otherwise throw exception (check contract)
If ifTest is true, ensure the condition is true; otherwise throw an ensure exception (check contract)
Ensure condition is true, otherwise throw exception (runs only in DEBUG mode)
Class to help extend the IndexNode key up to 1023 bytes in length (for string/byte[]) using the first 2 bits of BsonType
Read BsonType and UShort length from 2 bytes
Write BsonType and UShort length in 2 bytes
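The packing idea: BsonType values fit in 6 bits, which frees the top 2 bits of the type byte to carry the high bits of a 10-bit (0-1023) length. The exact bit layout below is an assumption for illustration, not LiteDB's verified format:

```python
def write_type_length(bson_type: int, length: int) -> bytes:
    # Assumed layout: bits 0-5 of byte 0 = BsonType, bits 6-7 = length bits 8-9,
    # byte 1 = length bits 0-7. Gives lengths up to 1023 in just 2 bytes.
    assert 0 <= bson_type < 64 and 0 <= length < 1024
    return bytes([bson_type | ((length >> 8) << 6), length & 0xFF])

def read_type_length(data: bytes) -> tuple:
    bson_type = data[0] & 0b0011_1111
    length = ((data[0] >> 6) << 8) | data[1]
    return bson_type, length
```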
Very fast way to check if a byte array is all zeros
Fill the whole array with a defined value
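In C# the fast path compares the buffer several bytes at a time instead of looping byte-by-byte; in Python the same effect comes from a single comparison against a zero block (a memcmp under the hood). An illustrative sketch:

```python
def is_full_zero(buf: bytes) -> bool:
    # Compare against an all-zero block of the same size; == uses memcmp internally,
    # which is far faster than a Python-level per-byte loop.
    return buf == bytes(len(buf))

def fill(buf: bytearray, value: int) -> None:
    # Fill the whole buffer with one value via slice assignment (no Python loop).
    buf[:] = bytes([value]) * len(buf)
```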
Read a UTF8 string until \0 is found
Copy Int16 bytes direct into buffer
Copy Int32 bytes direct into buffer
Copy Int64 bytes direct into buffer
Copy UInt16 bytes direct into buffer
Copy UInt32 bytes direct into buffer
Copy UInt64 bytes direct into buffer
Copy Single bytes direct into buffer
Copy Double bytes direct into buffer
Read string with \0 on end. Returns full string length (including \0 char)
Read any BsonValue. Uses 1 byte for the data type, 1 byte for the length (optional), and 0-255 bytes for the value.
For document or array, use BufferReader
Write any BsonValue. Uses 1 byte for the data type, 1 byte for the length (optional), and 0-255 bytes for the value.
For document or array, use BufferWriter
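The tagged format described above can be sketched as follows. The tag values and which types carry a length byte are assumptions for illustration; LiteDB's real BsonType codes differ:

```python
import struct

def write_index_key(value) -> bytes:
    # Hypothetical tags for illustration: 0 = Null, 1 = Int32, 2 = String.
    if value is None:
        return bytes([0])                             # type byte only, no payload
    if isinstance(value, int):
        return bytes([1]) + struct.pack("<i", value)  # fixed size: no length byte needed
    if isinstance(value, str):
        raw = value.encode("utf-8")
        if len(raw) > 255:
            raise ValueError("payload must fit in a 1-byte length")
        return bytes([2, len(raw)]) + raw             # type + length + payload
    raise TypeError(f"unsupported type: {type(value)}")
```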
Truncate DateTime in milliseconds
Get a value from a dictionary, converting it to datatype T
Get a value from a key, converted from file size format: "1gb", "10 mb", "80000"
Get Path (better ToString) from an Expression.
Support multi levels: x => x.Customer.Address
Support list levels: x => x.Addresses.Select(z => z.StreetName)
Detect if an exception is a Locked exception
Make the current thread wait N milliseconds if the exception is about locking
Return the same IEnumerable, but indicating whether each item is the last item in the enumerable
If the Stream is a FileStream, flush content directly to disk (avoiding the OS cache)
Test if string is simple word pattern ([a-Z$_])
Implement SqlLike in C# string - based on
https://stackoverflow.com/a/8583383/3286260
I removed support for [ and ] to avoid unmatched close brackets
Get the first string segment before any `%` or `_` wildcard - used for index StartsWith; the out parameter indicates whether more pattern follows the wildcard
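The LIKE semantics described above translate mechanically to a regex: % becomes .*, _ becomes ., and everything else is escaped ([ and ] ranges intentionally unsupported, as noted). An illustrative Python sketch:

```python
import re

def sql_like_to_regex(pattern: str) -> "re.Pattern":
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")          # any run of characters (including empty)
        elif ch == "_":
            parts.append(".")           # exactly one character
        else:
            parts.append(re.escape(ch)) # everything else matches literally
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)
```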
A simple file helper tool with static methods
Create a temp filename based on the original filename - checks if the file exists (if it does, appends a counter number)
Get LOG file based on data file
Get TEMP file based on data file
Test if the file is in use by any process
Try to execute an action, retrying while lock exceptions occur
Try to execute an action, retrying while lock exceptions occur. If a timeout occurs, throw the last exception
Convert storage unit string "1gb", "10 mb", "80000" to long bytes
Format a long file length to pretty file unit
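Both directions can be sketched in a few lines; the binary (1024-based) units and the "0 on invalid input" behavior are assumptions for illustration:

```python
import re

_UNITS = {"": 1, "b": 1, "kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3, "tb": 1024 ** 4}

def parse_file_size(text: str) -> int:
    # Accepts "1gb", "10 mb", "80000" (bare numbers are plain bytes).
    m = re.fullmatch(r"\s*(\d+)\s*([a-z]*)\s*", text.lower())
    if m is None or m.group(2) not in _UNITS:
        return 0                                   # assumed: invalid input maps to 0
    return int(m.group(1)) * _UNITS[m.group(2)]

def format_file_size(size: float) -> str:
    # Divide by 1024 until the value fits the unit, then print one decimal place.
    for unit in ("bytes", "KB", "MB", "GB"):
        if size < 1024:
            n = f"{size:.0f}" if unit == "bytes" else f"{size:.1f}"
            return f"{n} {unit}"
        size /= 1024
    return f"{size:.1f} TB"
```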
Get a CultureInfo object from an LCID code (not available in .NET Standard 1.3)
Get the current operating system's LCID culture
The main exception for LiteDB
A critical error should stop the engine and release the data files and all memory allocations
Convert filename to mimetype (http://stackoverflow.com/questions/1029740/get-mime-type-from-filename-extension)
A singleton shared randomizer class
Implements a generic result structure with value and exception. The value can be a partial value (like BsonDocument/Array)
Get the array result, or throw an exception if there was any error reading the result
ASCII char names: https://www.ascii.cl/htmlcodes.htm
{
}
[
]
(
)
,
:
;
@
#
~
.
&
$
!
!=
=
>
>=
<
<=
-
+
*
/
\
%
"..." or '...'
[0-9]+
[0-9]+.[0-9]
\n\r\t \u0020
[a-Z_$]+[a-Z0-9_$]
Represent a single string token
Expect the token to be of the given type (if not, throw UnexpectedToken)
Expect for type1 OR type2 (if not, throw UnexpectedToken)
Expect for type1 OR type2 OR type3 (if not, throw UnexpectedToken)
Class to tokenize TextReader input used in JsonRead/BsonExpressions
This class is not thread-safe
If at EOF, throw an invalid token exception (used in while()); otherwise return false (not EOF)
Checks if a char is a valid part of a word [a-Z_]+[a-Z0-9_$]*
Read next char in stream and set in _current
Look ahead at the next token, keeping it in the buffer so the next "ReadToken()" call returns it.
Read next token (or from ahead buffer).
Read next token from reader
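The LookAhead/ReadToken pair above is a classic one-token buffer; a minimal illustrative sketch (a hypothetical Tokenizer, not LiteDB's API):

```python
class Tokenizer:
    """One-token lookahead buffer: look_ahead peeks, read_token consumes."""

    def __init__(self, tokens):
        self._source = iter(tokens)
        self._ahead = None            # the buffered (peeked but unconsumed) token

    def look_ahead(self):
        if self._ahead is None:
            self._ahead = next(self._source, None)
        return self._ahead            # stays buffered until read_token()

    def read_token(self):
        if self._ahead is not None:
            token, self._ahead = self._ahead, None
            return token
        return next(self._source, None)
```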
Eat all whitespace - used before a valid token
Read a word (word = [\w$]+)
Read a number - it accepts all number chars but does not validate them. When Convert runs, .NET will check whether the number is correct
Read a string removing open and close " or '
Read all chars to end of LINE