My understanding of the MongoDB extension
MongoDB stores its documents in BSON format (BSON: Binary JSON), and because BSON itself supports binary data, a file's binary content can be saved directly inside a MongoDB document. At another blogger's insistence (a threat, I should point out, not a temptation), I would like to make a small sharing for everyone. Since there is relatively little Chinese material on MongoDB, I will keep sharing as the opportunity arises, and I hope this small write-up brings you something useful. Closer to home, please read on. Why do I call it an "old acquaintance"? Because, as mentioned above, MongoDB's storage format is BSON, and since BSON supports binary data, a file's binary content can be written straight into a MongoDB document structure. MongoDB is organized in three levels: database, collection, and document. The correspondence with a relational database is: a relational database's database corresponds to a MongoDB database, a table corresponds to a collection, and a row corresponds to a document…
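As a concrete illustration of that last point, a minimal sketch with the legacy PHP mongo extension (the same Mongo class used throughout these notes): the raw bytes of a file are wrapped in a MongoBinData object and stored inside an ordinary document. The database and collection names here are hypothetical examples, not taken from the article.

<?php
// Minimal sketch: store a file's raw bytes inside a normal document
// using the legacy PHP mongo extension. Database/collection names are examples.
$conn = new Mongo();                                   // connect to localhost:27017
$collection = $conn->selectCollection('files', 'docs');

$bytes = file_get_contents('./logo.png');              // read the file as a binary string
$collection->insert(array(
    'filename' => 'logo.png',
    'data'     => new MongoBinData($bytes, MongoBinData::GENERIC), // BSON binary type keeps the bytes intact
));

// Read it back and restore the original content
$doc = $collection->findOne(array('filename' => 'logo.png'));
file_put_contents('./logo-copy.png', $doc['data']->bin);
?>

Note that a single document is limited by the BSON size limit, which is why GridFS (covered later in these notes) exists for larger files.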
How to solve the MongoDB database integer problem
The integer problem discussed in this article is really not a MongoDB problem but a PHP driver problem: MongoDB itself has two integer types, a 32-bit integer and a 64-bit integer, but the old PHP driver treated every integer as 32-bit regardless of whether the operating system was 32-bit or 64-bit, so 64-bit integers were truncated. To fix this while staying as compatible as possible, the newer PHP driver added the mongo.native_long option, which makes integers be handled as 64-bit values on 64-bit operating systems. If you are interested, see "64-bit integers in MongoDB". So does the PHP driver now solve the integer problem completely? No! There is still a bug when handling group operations. To illustrate the problem, let's first generate some test data:

<?php
ini_set('mongo.native_long', 1);
$instance = new Mongo();
$instance = $instance->selectCollection('test', 'test');
for ($i = 0; $i < 10; $i++) {
    $instance->insert(array(
        'group_id' => rand(1, 5),
        'count'    => rand(1, 5),
    ));
}
?>

Now let's use a group operation to group by group_id and sum count:

<?php
ini_set('mongo.native_long', 1);
$instance = new Mongo();…
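The excerpt is cut off here; the grouping code it refers to would look roughly like the following sketch, which uses the legacy driver's MongoCollection::group() with a JavaScript reduce function. The field names follow the test data above; the exact code in the source article may differ.

<?php
// Rough sketch of the group-by-group_id aggregation the article describes.
ini_set('mongo.native_long', 1);
$instance = new Mongo();
$instance = $instance->selectCollection('test', 'test');

$keys    = array('group_id' => 1);                      // group key
$initial = array('count' => 0);                         // per-group accumulator
$reduce  = new MongoCode('function (doc, prev) { prev.count += doc.count; }');

$result = $instance->group($keys, $initial, $reduce);
var_dump($result['retval']);                            // per-group sums
?>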
Using MongoDB's GridFS under PHP to store files
<?php
// Initialize GridFS
$conn = new Mongo();                     // connect to MongoDB
$db   = $conn->photos;                   // select the database
$grid = $db->getGridFS();                // get the default GridFS object
$grid = $db->getGridFS('file');          // or get a GridFS object with the custom prefix "file"

// GridFS offers three ways to store files.

// 1. Store a file directly from disk
$id = $grid->storeFile("./logo.png");
$id = $grid->put('./logo.png', array('owner' => 'myname'));
// put() behaves the same as storeFile(); the second argument is extra metadata
// that is saved in the files collection

// 2. Store a binary string
$data = file_get_contents("./logo.png");
$id = $grid->storeBytes($data, array('param' => 'extra metadata stored with the picture'));

// 3. Store a file submitted through a form ($_FILES) directly
$id = $grid->storeUpload('upfile');
// equivalent to: $id = $grid->storeFile($_FILES['upfile']['tmp_name']);

// -------- The above saves the picture; below we read it back --------
// On success the store methods return the _id of the file document
$logo = $grid->findOne(array('_id' => $id)); // look the file up by _id, or simply by file name
header('Content-type: image/png');           // output the image header
echo $logo->getBytes();                      // output the file's data
?>

$grid->remove() deletes files matching the given criteria; $grid->delete() also deletes a file, but only the file's _id can be passed; $grid->drop() clears all GridFS data; $grid->find() and findOne() search for files and return MongoGridFSCursor / MongoGridFSFile objects; find() with no arguments simply returns all file…
Summary of common operations on MongoDB data in PHP
This article summarizes the common MongoDB operations in PHP in some detail; readers who need a reference are welcome to use it. I hope it is helpful to everyone.

<?php
$mongodb = new Mongo();
// $connection = new Mongo("$dburl:$port");          // connect to a remote host (default port)
$mydb = $mongodb->mydb;                              // implicitly create the database mydb
$mydb = $mongodb->selectDB("mydb");                  // or directly select an existing database
$collection = $mydb->mycollect;                      // select the collection to use; it is created automatically if it does not exist
$collection = $mydb->selectCollection('mycollect');  // only selects, does not create

// Insert a new record
$collection->insert(array("name" => "l4yn3", "age" => "10", "sex" => "unknown"));

// Modify a record
$where = array("name" => "l4yn3");
$update_item = array('$set' => array("age" => "15", "sex" => "secret"));
$collection->update($where, $update_item);
$options['multiple'] = true;                         // defaults to false; whether to update all matching rows
$collection->update($where, $update_item, $options);

// Query records
$myinfo = $collection->findOne(array("name" => "l4yn3"));
$myinfo = $collection->findOne(array("name" => "l4yn3"), array("age" => 1)); // only return the age field

// Search by condition:
$query = array("name" => "l4yn3");
$cursor = $collection->find($query);                 // find documents satisfying $query in the $collection collection
while ($cursor->hasNext()) {
    var_dump($cursor->getNext());                    // returns an array
}

// Return the number of documents
$collection->count();

// Delete a database:
$connection->dropDB("…");

// List all available databases:
$m->listDBs();                                       // returns an array of database info

// Close the connection:
$connection->close();
?>

Various ways to pass parameters when connecting to the MongoDB database in PHP: // Connect to localhost:27017 $conn…
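The excerpt is truncated at that point; with the legacy driver the common connection variants look roughly like the following sketch. The host names, credentials, and replica-set name below are placeholders, not values from the article.

<?php
// Sketch of common connection variants for the legacy PHP driver.
// Host names, credentials and the replica set name are placeholders.
$conn = new Mongo();                                         // connect to localhost:27017
$conn = new Mongo("mongodb://localhost:27017");              // same, written as a connection string
$conn = new Mongo("mongodb://user:password@localhost/blog"); // authenticate against the "blog" database
$conn = new Mongo("mongodb://host1:27017,host2:27017",       // connect to a replica set
                  array("replicaSet" => "rs0"));
?>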
Analysis of replication, clustering, and sharding in MySQL and MongoDB
Distributed database computing involves requirements such as distributed transactions, data distribution, and data aggregation. Distributed databases can deliver high security, high performance, and high availability, but of course they also bring high cost (both fixed and operating costs). Here we analyze the design ideas behind MongoDB and MySQL Cluster from the implementation point of view, to abstract design methods we can refer to when designing databases and apply them to our production systems. First, the characteristics of relational and non-relational databases. MySQL's InnoDB and MySQL Cluster have full ACID properties: A (Atomicity): the entire transaction either completes as a whole or is rolled back as a whole. C (Consistency): the database's integrity constraints are not violated before the transaction starts or after it ends. I (Isolation): two transactions execute without interfering with each other; neither sees the other's intermediate state. D (Durability): once a transaction completes, the changes it made to the database are persisted completely. To implement ACID, techniques such as undo and redo logging, MVCC, test-and-set (TAS), semaphores, two-phase locking, two-phase commit, and locking are…
Data query performance test comparison between MySQL and MongoDB
After testing batch inserts yesterday, today let's test reads. Because there are many test items, I am testing on my local machine: an AMD Sempron 2300+, 2 GB of memory, 32-bit Windows XP. For this test I take 1, 10, 20, 50, 100, 1000, 5000, 10000, 100000, and 200000 as the reference values for x.

Test condition 1: select id,a1,a2 from a order by id desc limit x
Test condition 2: select id,a1,a2 from a where id>100000 order by id desc limit 1
Test condition 3: select id,a1,a2 from a where id>100000 order by id desc limit 200000, x
select id,a1,a2 from a where id='100000'…
Test condition 6: select id,a1,a2 from a where id>'100000' and id<'100050' order by id desc

Based on the data above: with a sorted query condition and a limit, MySQL reads data faster than MongoDB. When reading a large result set, MySQL is faster than MongoDB, but when reading a small amount of data, MySQL is slower than MongoDB. When a where condition is specified, MySQL reads slightly faster than MongoDB, and as more where conditions are added, MySQL's advantage over MongoDB grows.
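For reference, the MongoDB side of such a comparison would be expressed roughly as follows with the legacy PHP driver. The collection "a" and fields id, a1, a2 mirror the SQL above; this is a sketch of the equivalent queries, not the article's own benchmark code.

<?php
// Rough MongoDB equivalents of some of the SQL test conditions above.
$conn = new Mongo();
$a = $conn->selectCollection('test', 'a');
$fields = array('id' => 1, 'a1' => 1, 'a2' => 1);
$x = 100;                                                   // one of the reference values

// Condition 1: ORDER BY id DESC LIMIT x
$cursor = $a->find(array(), $fields)->sort(array('id' => -1))->limit($x);

// Condition 2: WHERE id > 100000 ORDER BY id DESC LIMIT 1
$cursor = $a->find(array('id' => array('$gt' => 100000)), $fields)
            ->sort(array('id' => -1))->limit(1);

// Condition 6: WHERE id > 100000 AND id < 100050 ORDER BY id DESC
$cursor = $a->find(array('id' => array('$gt' => 100000, '$lt' => 100050)), $fields)
            ->sort(array('id' => -1));
?>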
Basic statement usage in MongoDB
Query:
MySQL: SELECT * FROM user
Mongo: db.user.find()

Conditional query:
MySQL: SELECT * FROM user WHERE name = 'starlee'
Mongo: db.user.find({'name' : 'starlee'})

Insert:
MySQL: INSERT INTO user (`name`, `age`) values ('starlee', 25)
Mongo: db.user.insert({'name' : 'starlee', 'age' : 25})

To add a field in MySQL you must run ALTER TABLE user…, but in MongoDB you just need:
db.user.insert({'name' : 'starlee', 'age' : 25, 'email' : ' [email protected]'})

Delete:
MySQL: DELETE FROM user
Mongo: db.user.remove({})

Conditional delete:
MySQL: DELETE FROM user WHERE age < 30
Mongo: db.user.remove({'age' : {$lt : 30}})

Comparison operators in MongoDB: $gt : >, $gte : >=, $lt : <, $lte : <=, $ne : !=

Update:
MySQL: UPDATE user SET `age` = 36 WHERE `name` = 'starlee'
Mongo: db.user.update({'name' : 'starlee'}, {$set : {'age' : 36}})

Update with an operation:
MySQL: UPDATE user SET `age` = `age` + 3 WHERE `name` = 'starlee'
Mongo: db.user.update({'name' : 'starlee'}, {$inc : {'age' : 3}})

Count:
MySQL: SELECT COUNT(*) FROM user WHERE `name` = 'starlee'
Mongo: db.user.find({'name' : 'starlee'}).count()
It can also be written as: db.user.count({'name' : 'starlee'})

LIMIT clause:
MySQL: SELECT * FROM user limit 10,20
Mongo: db.user.find().skip(10).limit(20)

IN clause:
MySQL: SELECT * FROM user…
Instructions for using the MongoDB update API
This is explained from two angles: the shell command line and the Java client API.

Java client example:

ResultSet set = conn.prepareStatement(querysql + fromsql).executeQuery();
while (set.next()) {
    DBObject q = new BasicDBObject().append("T", "a")
            .append("CI", String.valueOf(set.getInt(1)))
            .append("AI", String.valueOf(set.getInt(3)));
    DBObject o = new BasicDBObject().append("$set",
            new BasicDBObject().append("CN", set.getString(2))
                    .append("AN", set.getString(4))
                    .append("S", set.getString(5)));
    Mongodb.getDB("yqflog").getCollection("info").update(q, o, true, true);
}

MongoDB Java client API:

/**
 * calls {@link DBCollection#update(com.mongodb.DBObject, com.mongodb.DBObject, boolean, boolean, com.mongodb.WriteConcern)} with the default WriteConcern.
 * @param q search query for the old object to update
 * @param o object with which to update q
 * @param upsert if the database should create the element if it does not exist
 * @param multi if the update should be applied to all matching objects (db version 1.1.3 and above)
 *
 * See http://www.mongodb.org/display/DOCS/Atomic+Operations
 * @return
 * @throws MongoException
 * @dochub update
 */
public WriteResult update( DBObject q , DBObject o , boolean upsert , boolean multi ) throws MongoException {
    return update( q , o , upsert , multi , getWriteConcern() );
}

Shell command line:

db.collection.update( criteria, objNew, upsert, multi )

Arguments:
criteria – the query that selects the record to update;
objNew – the updated object, or $ operators (e.g., $inc) that manipulate the object…
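For completeness, the same upsert/multi update can be issued from the legacy PHP driver as well. The database, collection, and field names below reuse the ones from the Java example; the concrete values are illustrative placeholders.

<?php
// Sketch: upsert + multi update with the legacy PHP driver.
// Values for CI and CN are placeholders.
$conn = new Mongo();
$info = $conn->selectCollection('yqflog', 'info');

$criteria = array('T' => 'a', 'CI' => '42');               // which documents to update
$newObj   = array('$set' => array('CN' => 'some name'));   // how to change them

$info->update($criteria, $newObj, array(
    'upsert'   => true,   // insert if nothing matches
    'multiple' => true,   // apply to every matching document
));
?>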
How to use MongoDB and GridFS file system
GridFS is a specification for storing large files in a MongoDB database. All officially supported drivers implement the GridFS specification.

1 Why use GridFS
Because the size of a BSON object in MongoDB is limited, the GridFS specification provides a transparent mechanism for splitting a large file into many smaller documents. This mechanism lets us store large file objects efficiently without having to worry about huge files such as videos or high-definition pictures.

2 How mass storage is achieved
The specification defines a standard for chunking files: each file is stored as one metadata object in a files collection, plus one or more chunk objects that together are stored in a chunks collection.

3 Brief introduction
GridFS uses two collections to store data: files (containing the metadata objects) and chunks (the binary chunks together with some other related information). So that multiple GridFS stores can share a single database, the files and chunks collections are given a prefix; by default the prefix is fs, so any default GridFS store uses the namespaces fs.files and fs.chunks. The drivers for the various languages are allowed to change this prefix, so you can set up another GridFS namespace that is used…
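In the legacy PHP driver, for example, the prefix is chosen when the GridFS object is obtained. A small sketch follows; the prefix name 'attachments' and the 'photos' database are just examples.

<?php
// Sketch: default vs. custom GridFS prefix with the legacy PHP driver.
$conn = new Mongo();
$db   = $conn->photos;

$grid = $db->getGridFS();               // uses fs.files and fs.chunks
$grid = $db->getGridFS('attachments');  // uses attachments.files and attachments.chunks

$id = $grid->storeFile('./logo.png');   // the file's metadata goes into <prefix>.files,
                                        // its binary chunks into <prefix>.chunks
?>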
MongoDB and GridFS file system
GridFS is used to store and retrieve files that exceed 16 MB (the BSON document limit). GridFS splits a file into chunks and stores each chunk as a separate document; the default chunk size in GridFS is 256 KB. GridFS uses two collections, one for the chunks and one for the metadata: fs.files and fs.chunks.

When should I use GridFS? http://docs.mongodb.org/manual/faq/developers/#faq-developers-when-to-use-gridfs

The files collection: its documents have the following form

{
  "_id" : <ObjectId>,
  "length" : <num>,
  "chunkSize" : <num>,
  "uploadDate" : <timestamp>,
  "md5" : <hash>,
  "filename" : <string>,
  "contentType" : <string>,
  "aliases" : <string array>,
  "metadata" : <dataObject>,
}

Documents in the files collection contain some or all of the following fields; applications may also create additional arbitrary fields:

files._id — The unique ID of this document. The _id has whatever data type you chose for the original document; the default type for MongoDB documents is the BSON ObjectId.
files.length — The size of the document in bytes.
files.chunkSize — The size of each chunk. GridFS divides the document into chunks of the size specified here. The default size is 256 kilobytes.
files.uploadDate — The date the document was first stored by GridFS. This value has the Date type.
files.md5 — An MD5 hash returned from the filemd5 command. This value has…
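To see these fields in practice, one can query the fs.files collection directly. A small sketch with the legacy PHP driver follows; the 'photos' database and 'logo.png' file name are examples, not values from the article.

<?php
// Sketch: inspect GridFS metadata by querying fs.files directly.
$conn = new Mongo();
$db   = $conn->photos;

$meta = $db->selectCollection('fs.files')->findOne(array('filename' => 'logo.png'));
echo $meta['length'], ' bytes in chunks of ', $meta['chunkSize'], " bytes\n";
echo 'md5: ', $meta['md5'], "\n";

// The matching binary pieces live in fs.chunks, keyed by files_id
$chunks = $db->selectCollection('fs.chunks')->find(array('files_id' => $meta['_id']));
echo $chunks->count(), " chunk(s)\n";
?>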