Setting Up the Sharding Environment
Use simple-setup.py with two changes. First, set the data path:
BASE_DATA_PATH = './data/db/sharding/'
Note that the script does not call os.path.expanduser, so the path must be absolute or relative; a user path beginning with ~ is not expanded, and the directory would be created literally under the current working directory.
Second, add the maxSize parameter to the addshard call:
admin.command('addshard', 'localhost:3000'+str(i), allowLocal=True, maxSize=3)
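For reference, here is a sketch of what the modified addshard loop ends up sending. It builds only the command documents as plain dicts (the helper name is hypothetical, not from simple-setup.py), since the point of the change is registering localhost shards with a size cap:

```python
# Hypothetical helper reconstructing the addshard command documents that the
# modified simple-setup.py sends via admin.command(); not the script's code.
def addshard_commands(n_shards=2, max_size_mb=3):
    cmds = []
    for i in range(1, n_shards + 1):
        cmds.append({
            'addshard': 'localhost:3000' + str(i),  # shard mongods on 30001, 30002
            'allowLocal': True,      # mongos normally refuses localhost shards
            'maxSize': max_size_mb,  # cap the shard's data size, in MB
        })
    return cmds
```

With maxSize: 3 each shard advertises a 3 MB cap, which makes the balancer's behavior observable with the small test files used below.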
Starting Up
This requires recent builds of both MongoDB and PyMongo.
(sharding)ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ lssitepackages
distribute-0.6.10-py2.5.egg gridfs pymongo setuptools.pth
easy-install.pth pip-0.7.2-py2.5.egg pymongo-1.8.1-py2.5.egg-info
Running
(sharding)ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ python simple-setup.py
Note: run this with network access available; otherwise you will hit a DNS resolution error.
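The DNS error most likely comes from resolving the machine's own hostname at startup. A quick pre-flight check (a hypothetical helper, not part of simple-setup.py):

```python
import socket

def hostname_resolves(name=None):
    """Return True if `name` (default: this host's own name) can be resolved."""
    name = name or socket.gethostname()
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:  # the failure that surfaces as the DNS error
        return False
```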
Checking the Servers
The processes line up as: the config server mongod on 20001, the two shard mongods on 30001 and 30002, and the mongos on 27017; each port+1000 listener (21001, 31001, 31002, 28017) is the corresponding process's built-in HTTP status interface.
ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ netstat -lntp|grep mongo
tcp 0 0 0.0.0.0:31002 0.0.0.0:* LISTEN 18942/mongod
tcp 0 0 0.0.0.0:20001 0.0.0.0:* LISTEN 18923/mongod
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 18951/mongos
tcp 0 0 0.0.0.0:21001 0.0.0.0:* LISTEN 18923/mongod
tcp 0 0 0.0.0.0:28017 0.0.0.0:* LISTEN 18951/mongos
tcp 0 0 0.0.0.0:30001 0.0.0.0:* LISTEN 18931/mongod
tcp 0 0 0.0.0.0:30002 0.0.0.0:* LISTEN 18942/mongod
tcp 0 0 0.0.0.0:31001 0.0.0.0:* LISTEN 18931/mongod
Inspecting the Data Directory
(The 64 MB and 128 MB files are preallocated data files; their sizes do not reflect how much data is actually stored.)
`-- [4.0K] db
`-- [4.0K] sharding
|-- [4.0K] config_1
| |-- [4.0K] _tmp
| |-- [ 64M] config.0
| |-- [128M] config.1
| |-- [ 16M] config.ns
| |-- [8.0K] diaglog.4c8790f6
| `-- [ 6] mongod.lock
|-- [4.0K] shard_1
| |-- [4.0K] _tmp
| |-- [ 6] mongod.lock
| |-- [ 64M] test.0
| |-- [128M] test.1
| `-- [ 16M] test.ns
`-- [4.0K] shard_2
`-- [ 6] mongod.lock
7 directories, 10 files
Checking Status
> use admin
switched to db admin
> db.runCommand({listshards:1})
{
"shards" : [
{
"_id" : "shard0000",
"host" : "localhost:30001",
"maxSize" : NumberLong( 3 )
},
{
"_id" : "shard0001",
"host" : "localhost:30002",
"maxSize" : NumberLong( 3 )
}
],
"ok" : 1
}
> db.runCommand({isdbgrid:1})
{ "isdbgrid" : 1, "hostname" : "ant-r60", "ok" : 1 }
> db.runCommand({ismaster:1})
{ "ismaster" : 1, "msg" : "isdbgrid", "ok" : 1 }
Preparing the Test Data Files
ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ tree -h ~/Desktop/sharding/
/home/ant/Desktop/sharding/
|-- [ 67K] 1.chm
|-- [1.1M] 2.ppt
|-- [2.5M] 3.rar
|-- [3.9M] 4.rar
`-- [9.9M] 5.rar
0 directories, 5 files
Adding a File Smaller Than 1 MB
ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ ./mongofiles put -d test ~/Desktop/sharding/1.chm
connected to: 127.0.0.1
added file: { _id: ObjectId('4c87929262bf8428e17d12de'), filename: "/home/ant/Desktop/sharding/1.chm", chunkSize: 262144, uploadDate: new Date(1283953298598), md5: "f53821c3bcb5d971e548b1a9d7cf271d", length: 68350 }
done!
Inspecting the Data Directory
The tree is unchanged: shard_2 still holds only mongod.lock, so the small file went entirely to shard0000 without triggering any migration.
`-- [4.0K] db
`-- [4.0K] sharding
|-- [4.0K] config_1
| |-- [4.0K] _tmp
| |-- [ 64M] config.0
| |-- [128M] config.1
| |-- [ 16M] config.ns
| |-- [8.0K] diaglog.4c8790f6
| `-- [ 6] mongod.lock
|-- [4.0K] shard_1
| |-- [4.0K] _tmp
| |-- [ 6] mongod.lock
| |-- [ 64M] test.0
| |-- [128M] test.1
| `-- [ 16M] test.ns
`-- [4.0K] shard_2
`-- [ 6] mongod.lock
7 directories, 10 files
Adding a File Between 1 and 2 MB
ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ ./mongofiles put -d test ~/Desktop/sharding/2.ppt
connected to: 127.0.0.1
added file: { _id: ObjectId('4c8796e78adb774331a70a87'), filename: "/home/ant/Desktop/sharding/2.ppt", chunkSize: 262144, uploadDate: new Date(1283954408277), md5: "9f14948c6a0f953f3714a3e5d46a2627", length: 1133056 }
done!
ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ ./mongofiles list -d test
connected to: 127.0.0.1
/home/ant/Desktop/sharding/2.ppt 1133056
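The chunk counts, and why this upload triggers a split while the first one did not, can be estimated from the chunkSize (262144 bytes) reported by mongofiles and the splitThreshold (943718 bytes) that appears in the mongos log below. A back-of-the-envelope check, not taken from the source:

```python
import math

CHUNK_SIZE = 262144       # GridFS chunk size reported by mongofiles above
SPLIT_THRESHOLD = 943718  # splitThreshold from the mongos log, in bytes

def gridfs_chunk_count(length, chunk_size=CHUNK_SIZE):
    """Number of fs.chunks documents GridFS stores for a file of `length` bytes."""
    return math.ceil(length / chunk_size)

# 1.chm (68350 B) fits in a single chunk and stays under the threshold;
# 2.ppt (1133056 B) becomes 5 chunks and pushes fs.chunks past the
# threshold, which is what triggers the autosplit and migration below.
for name, length in [('1.chm', 68350), ('2.ppt', 1133056), ('5.rar', 10332307)]:
    print(name, gridfs_chunk_count(length), length > SPLIT_THRESHOLD)
```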
Inspecting the Data Directory
shard_2 now has test data files of its own; the server log output further below shows the autosplit and chunk migration that created them.
.
`-- [4.0K] db
`-- [4.0K] sharding
|-- [4.0K] config_1
| |-- [4.0K] _tmp
| |-- [ 64M] config.0
| |-- [128M] config.1
| |-- [ 16M] config.ns
| |-- [ 16K] diaglog.4c8796ad
| `-- [ 6] mongod.lock
|-- [4.0K] shard_1
| |-- [4.0K] _tmp
| |-- [ 6] mongod.lock
| |-- [ 64M] test.0
| |-- [128M] test.1
| `-- [ 16M] test.ns
`-- [4.0K] shard_2
|-- [4.0K] _tmp
|-- [ 6] mongod.lock
|-- [ 64M] test.0
|-- [128M] test.1
`-- [ 16M] test.ns
8 directories, 13 files
Checking Status
> db.printShardingStatus()
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{
"_id" : "shard0000",
"host" : "localhost:30001",
"maxSize" : NumberLong( 3 )
}
{
"_id" : "shard0001",
"host" : "localhost:30002",
"maxSize" : NumberLong( 3 )
}
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
test.bar chunks:
{ "key" : { $minKey : 1 } } -->> { "key" : { $maxKey : 1 } } on : shard0000 { "t" : 1000, "i" : 0 }
test.foo chunks:
{ "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 { "t" : 1000, "i" : 0 }
test.fs.chunks chunks:
{ "files_id" : { $minKey : 1 } } -->> { "files_id" : ObjectId("4c8796e78adb774331a70a87") } on : shard0001 { "t" : 2000, "i" : 0 }
{ "files_id" : ObjectId("4c8796e78adb774331a70a87") } -->> { "files_id" : { $maxKey : 1 } } on : shard0000 { "t" : 2000, "i" : 1 }
test.fs.files chunks:
{ "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 { "t" : 1000, "i" : 0 }
>
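The chunk lists above reveal which collections are sharded and on which keys. The setup script presumably issued enablesharding/shardcollection commands along these lines (a hypothetical sketch of the command documents, not the script's actual code):

```python
# Hypothetical reconstruction of the sharding-setup command documents implied
# by printShardingStatus; simple-setup.py's actual calls may differ.
def sharding_setup_commands(db='test'):
    cmds = [{'enablesharding': db}]  # allow collections in `db` to be sharded
    # Collections and shard keys as shown in the status output above:
    for coll, key in [('foo', {'_id': 1}),
                      ('bar', {'key': 1}),
                      ('fs.files', {'_id': 1}),
                      ('fs.chunks', {'files_id': 1})]:
        cmds.append({'shardcollection': db + '.' + coll, 'key': key})
    return cmds
```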
Server Log Output
(The line prefixes appear to be annotations: S1 = the mongos, M1/M2 = the two shard mongods, C1 = the config server.)
S1: Wed Sep 8 22:00:07 [conn4] autosplitting test.fs.chunks size: 1048832 shard: ns:test.fs.chunks at: shard0000:localhost:30001 lastmod: 1|0 min: { files_id: MinKey } max: { files_id: MaxKey } on: { files_id: ObjectId('4c8796e78adb774331a70a87') }(splitThreshold 943718)
S1: Wed Sep 8 22:00:07 [conn4] config change: { _id: "ant-r60-2010-09-08T14:00:07-0", server: "ant-r60", time: new Date(1283954407239), what: "split", ns: "test.fs.chunks", details: { before: { min: { files_id: MinKey }, max: { files_id: MaxKey } }, left: { min: { files_id: MinKey }, max: { files_id: ObjectId('4c8796e78adb774331a70a87') } }, right: { min: { files_id: ObjectId('4c8796e78adb774331a70a87') }, max: { files_id: MaxKey } } } }
M2: Wed Sep 8 22:00:07 [initandlisten] connection accepted from 127.0.0.1:40279 #5
S1: Wed Sep 8 22:00:07 [conn4] moving chunk (auto): ns:test.fs.chunks at: shard0000:localhost:30001 lastmod: 1|1 min: { files_id: MinKey } max: { files_id: ObjectId('4c8796e78adb774331a70a87') } to: shard0001:localhost:30002 #objects: 0
S1: Wed Sep 8 22:00:07 [conn4] moving chunk ns: test.fs.chunks moving ( ns:test.fs.chunks at: shard0000:localhost:30001 lastmod: 1|1 min: { files_id: MinKey } max: { files_id: ObjectId('4c8796e78adb774331a70a87') }) shard0000:localhost:30001 -> shard0001:localhost:30002
M1: Wed Sep 8 22:00:07 [conn5] got movechunk: { moveChunk: "test.fs.chunks", from: "localhost:30001", to: "localhost:30002", min: { files_id: MinKey }, max: { files_id: ObjectId('4c8796e78adb774331a70a87') }, shardId: "test.fs.chunks-files_id_MinKey", configdb: "localhost:20001" }
M2: Wed Sep 8 22:00:07 [initandlisten] connection accepted from 127.0.0.1:40280 #6
M1: Wed Sep 8 22:00:07 [initandlisten] connection accepted from 127.0.0.1:51462 #6
M2: Wed Sep 8 22:00:07 allocating new datafile ./data/db/sharding/shard_2/test.ns, filling with zeroes...
M2: Wed Sep 8 22:00:07 done allocating datafile ./data/db/sharding/shard_2/test.ns, size: 16MB, took 0 secs
M2: Wed Sep 8 22:00:07 allocating new datafile ./data/db/sharding/shard_2/test.0, filling with zeroes...
M2: Wed Sep 8 22:00:07 done allocating datafile ./data/db/sharding/shard_2/test.0, size: 64MB, took 0 secs
M2: Wed Sep 8 22:00:07 allocating new datafile ./data/db/sharding/shard_2/test.1, filling with zeroes...
M2: Wed Sep 8 22:00:07 done allocating datafile ./data/db/sharding/shard_2/test.1, size: 128MB, took 0 secs
M2: Wed Sep 8 22:00:07 [migrateThread] building new index on { _id: 1 } for test.fs.chunks
M2: Wed Sep 8 22:00:07 [migrateThread] Buildindex test.fs.chunks idxNo:0 { name: "_id_", ns: "test.fs.chunks", key: { _id: 1 } }
M2: Wed Sep 8 22:00:07 [migrateThread] done for 0 records 0secs
M2: Wed Sep 8 22:00:07 [migrateThread] info: creating collection test.fs.chunks on add index
M2: building new index on { files_id: 1 } for test.fs.chunks
M2: Wed Sep 8 22:00:07 [migrateThread] Buildindex test.fs.chunks idxNo:1 { ns: "test.fs.chunks", key: { files_id: 1 }, name: "files_id_1" }
M2: Wed Sep 8 22:00:07 [migrateThread] done for 0 records 0secs
M2: Wed Sep 8 22:00:07 [migrateThread] building new index on { files_id: 1, n: 1 } for test.fs.chunks
M2: Wed Sep 8 22:00:07 [migrateThread] Buildindex test.fs.chunks idxNo:2 { ns: "test.fs.chunks", key: { files_id: 1, n: 1 }, name: "files_id_1_n_1" }
M2: Wed Sep 8 22:00:07 [migrateThread] done for 0 records 0secs
M1: Wed Sep 8 22:00:08 [conn5] _recvChunkStatus : { active: true, ns: "test.fs.chunks", from: "localhost:30001", min: { files_id: MinKey }, max: { files_id: ObjectId('4c8796e78adb774331a70a87') }, state: "steady", counts: { cloned: 0, catchup: 0, steady: 0 }, ok: 1.0 }
M1: Wed Sep 8 22:00:08 [conn5] moveChunk locking myself to: 2|0
C1: Wed Sep 8 22:00:08 [initandlisten] connection accepted from 127.0.0.1:44110 #9
M1: Wed Sep 8 22:00:08 [conn5] moveChunk commit result: { active: true, ns: "test.fs.chunks", from: "localhost:30001", min: { files_id: MinKey }, max: { files_id: ObjectId('4c8796e78adb774331a70a87') }, state: "done", counts: { cloned: 0, catchup: 0, steady: 0 }, ok: 1.0 }
M2: Wed Sep 8 22:00:08 [migrateThread] config change: { _id: "ant-r60-2010-09-08T14:00:08-0", server: "ant-r60", time: new Date(1283954408251), what: "moveChunk.to", ns: "test.fs.chunks", details: { step1: 29, step2: 0, step3: 0, step4: 0, step5: 972 } }
M1: Wed Sep 8 22:00:08 [conn5] moveChunk updating self to: 2|1
M1: Wed Sep 8 22:00:08 [conn5] config change: { _id: "ant-r60-2010-09-08T14:00:08-0", server: "ant-r60", time: new Date(1283954408252), what: "moveChunk", ns: "test.fs.chunks", details: { min: { files_id: MinKey }, max: { files_id: ObjectId('4c8796e78adb774331a70a87') }, from: "shard0000", to: "shard0001" } }
M1: Wed Sep 8 22:00:08 [conn5] doing delete inline
M1: Wed Sep 8 22:00:08 [conn5] moveChunk deleted: 0
M1: Wed Sep 8 22:00:08 [conn5] config change: { _id: "ant-r60-2010-09-08T14:00:08-1", server: "ant-r60", time: new Date(1283954408257), what: "moveChunk.from", ns: "test.fs.chunks", details: { step1: 0, step2: 1, step3: 1, step4: 1000, step5: 3, step6: 0 } }
M1: Wed Sep 8 22:00:08 [conn5] query admin.$cmd ntoreturn:1 command: { moveChunk: "test.fs.chunks", from: "localhost:30001", to: "localhost:30002", min: { files_id: MinKey }, max: { files_id: ObjectId('4c8796e78adb774331a70a87') }, shardId: "test.fs.chunks-files_id_MinKey", configdb: "localhost:20001" } reslen:53 1012ms
M1: Wed Sep 8 22:00:08 [conn4] shardVersionOk failed ns:(test.fs.chunks) op:(insert) your version is too old ns: test.fs.chunks global: 2|1 client: 1|2
M1: Wed Sep 8 22:00:08 [conn3] query admin.$cmd ntoreturn:1 command: { writebacklisten: ObjectId('4c8796ad5efb05532966e02f') } reslen:84723 14045ms
Adding a File Larger Than 6 MB
ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ ./mongofiles put -d test ~/Desktop/sharding/5.rar
connected to: 127.0.0.1
added file: { _id: ObjectId('4c87996f26bf8026d4ada84e'), filename: "/home/ant/Desktop/sharding/5.rar", chunkSize: 262144, uploadDate: new Date(1283955056741), md5: "35417ed4751230a70f1993839fb04357", length: 10332307 }
done!
ant@ant-r60:~/mongodb-linux-i686-1.6.1/bin$ ./mongofiles list -d test
connected to: 127.0.0.1
/home/ant/Desktop/sharding/5.rar 10332307
Server Log Output
S1: Wed Sep 8 22:10:55 [conn3] autosplitting test.fs.chunks size: 1048832 shard: ns:test.fs.chunks at: shard0000:localhost:30001 lastmod: 1|0 min: { files_id: MinKey } max: { files_id: MaxKey } on: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }(splitThreshold 943718)
S1: Wed Sep 8 22:10:55 [conn3] config change: { _id: "ant-r60-2010-09-08T14:10:55-0", server: "ant-r60", time: new Date(1283955055448), what: "split", ns: "test.fs.chunks", details: { before: { min: { files_id: MinKey }, max: { files_id: MaxKey } }, left: { min: { files_id: MinKey }, max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') } }, right: { min: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }, max: { files_id: MaxKey } } } }
M2: Wed Sep 8 22:10:55 [initandlisten] connection accepted from 127.0.0.1:46956 #4
S1: Wed Sep 8 22:10:55 [conn3] moving chunk (auto): ns:test.fs.chunks at: shard0000:localhost:30001 lastmod: 1|1 min: { files_id: MinKey } max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') } to: shard0001:localhost:30002 #objects: 0
S1: Wed Sep 8 22:10:55 [conn3] moving chunk ns: test.fs.chunks moving ( ns:test.fs.chunks at: shard0000:localhost:30001 lastmod: 1|1 min: { files_id: MinKey } max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }) shard0000:localhost:30001 -> shard0001:localhost:30002
M1: Wed Sep 8 22:10:55 [conn4] got movechunk: { moveChunk: "test.fs.chunks", from: "localhost:30001", to: "localhost:30002", min: { files_id: MinKey }, max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }, shardId: "test.fs.chunks-files_id_MinKey", configdb: "localhost:20001" }
C1: Wed Sep 8 22:10:55 [initandlisten] connection accepted from 127.0.0.1:55828 #8
M2: Wed Sep 8 22:10:55 [initandlisten] connection accepted from 127.0.0.1:46958 #5
M1: Wed Sep 8 22:10:55 [initandlisten] connection accepted from 127.0.0.1:40666 #5
M2: Wed Sep 8 22:10:55 allocating new datafile ./data/db/sharding/shard_2/test.ns, filling with zeroes...
M2: Wed Sep 8 22:10:55 done allocating datafile ./data/db/sharding/shard_2/test.ns, size: 16MB, took 0 secs
M2: Wed Sep 8 22:10:55 allocating new datafile ./data/db/sharding/shard_2/test.0, filling with zeroes...
M2: Wed Sep 8 22:10:55 done allocating datafile ./data/db/sharding/shard_2/test.0, size: 64MB, took 0 secs
M2: Wed Sep 8 22:10:55 allocating new datafile ./data/db/sharding/shard_2/test.1, filling with zeroes...
M2: Wed Sep 8 22:10:55 [migrateThread] building new index on { _id: 1 } for test.fs.chunks
M2: Wed Sep 8 22:10:55 [migrateThread] Buildindex test.fs.chunks idxNo:0 { name: "_id_", ns: "test.fs.chunks", key: { _id: 1 } }
M2: Wed Sep 8 22:10:55 [migrateThread] done for 0 records 0secs
M2: Wed Sep 8 22:10:55 [migrateThread] info: creating collection test.fs.chunks on add index
M2: building new index on { files_id: 1 } for test.fs.chunks
M2: Wed Sep 8 22:10:55 [migrateThread] Buildindex test.fs.chunks idxNo:1 { ns: "test.fs.chunks", key: { files_id: 1 }, name: "files_id_1" }
M2: Wed Sep 8 22:10:55 [migrateThread] done for 0 records 0secs
M2: Wed Sep 8 22:10:55 [migrateThread] building new index on { files_id: 1, n: 1 } for test.fs.chunks
M2: Wed Sep 8 22:10:55 [migrateThread] Buildindex test.fs.chunks idxNo:2 { ns: "test.fs.chunks", key: { files_id: 1, n: 1 }, name: "files_id_1_n_1" }
M2: Wed Sep 8 22:10:55 [migrateThread] done for 0 records 0secs
M2: Wed Sep 8 22:10:55 done allocating datafile ./data/db/sharding/shard_2/test.1, size: 128MB, took 0.021 secs
M1: Wed Sep 8 22:10:56 [conn4] _recvChunkStatus : { active: true, ns: "test.fs.chunks", from: "localhost:30001", min: { files_id: MinKey }, max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }, state: "steady", counts: { cloned: 0, catchup: 0, steady: 0 }, ok: 1.0 }
M1: Wed Sep 8 22:10:56 [conn4] moveChunk locking myself to: 2|0
C1: Wed Sep 8 22:10:56 [initandlisten] connection accepted from 127.0.0.1:55831 #9
M1: Wed Sep 8 22:10:56 [conn4] moveChunk commit result: { active: true, ns: "test.fs.chunks", from: "localhost:30001", min: { files_id: MinKey }, max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }, state: "done", counts: { cloned: 0, catchup: 0, steady: 0 }, ok: 1.0 }
M1: Wed Sep 8 22:10:56 [conn4] moveChunk updating self to: 2|1
M1: Wed Sep 8 22:10:56 [conn4] config change: { _id: "ant-r60-2010-09-08T14:10:56-0", server: "ant-r60", time: new Date(1283955056470), what: "moveChunk", ns: "test.fs.chunks", details: { min: { files_id: MinKey }, max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }, from: "shard0000", to: "shard0001" } }
M1: Wed Sep 8 22:10:56 [conn4] doing delete inline
M1: Wed Sep 8 22:10:56 [conn4] moveChunk deleted: 0
M1: Wed Sep 8 22:10:56 [conn4] config change: { _id: "ant-r60-2010-09-08T14:10:56-1", server: "ant-r60", time: new Date(1283955056470), what: "moveChunk.from", ns: "test.fs.chunks", details: { step1: 1, step2: 2, step3: 1, step4: 1000, step5: 11, step6: 0 } }
M1: Wed Sep 8 22:10:56 [conn4] query admin.$cmd ntoreturn:1 command: { moveChunk: "test.fs.chunks", from: "localhost:30001", to: "localhost:30002", min: { files_id: MinKey }, max: { files_id: ObjectId('4c87996f26bf8026d4ada84e') }, shardId: "test.fs.chunks-files_id_MinKey", configdb: "localhost:20001" } reslen:53 1017ms
M2: Wed Sep 8 22:10:56 [migrateThread] config change: { _id: "ant-r60-2010-09-08T14:10:56-0", server: "ant-r60", time: new Date(1283955056476), what: "moveChunk.to", ns: "test.fs.chunks", details: { step1: 29, step2: 0, step3: 0, step4: 0, step5: 980 } }
M1: Wed Sep 8 22:10:56 [conn2] shardVersionOk failed ns:(test.fs.chunks) op:(insert) your version is too old ns: test.fs.chunks global: 2|1 client: 1|2
M1: Wed Sep 8 22:10:56 [conn2] shardVersionOk failed ns:(test.fs.chunks) op:(query) your version is too old ns: test.fs.chunks global: 2|1 client: 1|2
S1: Wed Sep 8 22:10:56 [conn3] ~ScopedDBConnection: _conn != null
M1: Wed Sep 8 22:10:56 [conn3] query admin.$cmd ntoreturn:1 command: { writebacklisten: ObjectId('4c87995f6ac1a7a1008a5bec') } reslen:262387 1086ms
S1: Wed Sep 8 22:10:56 [conn3] ERROR: splitIfShould failed: ns: test.fs.chunks findOne has stale config
M1: Wed Sep 8 22:10:56 [conn2] end connection 127.0.0.1:40654
M1: Wed Sep 8 22:10:56 [initandlisten] connection accepted from 127.0.0.1:40668 #6
**S1: Wed Sep 8 22:10:56 [conn3] SHARD PROBLEM** shard is too big, but can't split: ns:test.fs.chunks at: shard0000:localhost:30001 lastmod: 2|1 min: { files_id: ObjectId('4c87996f26bf8026d4ada84e') } max: { files_id: MaxKey }**
S1: Wed Sep 8 22:10:56 [conn3] end connection 127.0.0.1:55768
S1: Wed Sep 8 22:10:59 [Balancer] no availalable shards to take chunks
...
S1: Wed Sep 8 22:11:28 connection accepted from 127.0.0.1:55779 #4
S1: Wed Sep 8 22:11:28 [conn4] end connection 127.0.0.1:55779
S1: Wed Sep 8 22:13:08 connection accepted from 127.0.0.1:55780 #5
S1: Wed Sep 8 22:13:18 [conn5] creating WriteBackListener for: localhost:20001
M1: Wed Sep 8 22:13:19 [initandlisten] connection accepted from 127.0.0.1:40671 #7
M2: Wed Sep 8 22:13:19 [initandlisten] connection accepted from 127.0.0.1:46965 #6
C1: Wed Sep 8 22:13:19 [initandlisten] connection accepted from 127.0.0.1:55837 #10
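The SHARD PROBLEM above follows directly from the shard key: fs.chunks is sharded on files_id alone, and every chunk document of a single GridFS file carries the same files_id, so the range holding the 10 MB file has no interior split point. A server-free illustration (hypothetical helper; chunkSize and file length taken from the mongofiles output):

```python
import math

CHUNK_SIZE = 262144  # GridFS chunk size from the mongofiles output

def gridfs_chunk_keys(file_id, length, chunk_size=CHUNK_SIZE):
    """Shard-key fields of the fs.chunks documents for one GridFS file."""
    n_chunks = math.ceil(length / chunk_size)
    return [{'files_id': file_id, 'n': n} for n in range(n_chunks)]

keys = gridfs_chunk_keys('4c87996f26bf8026d4ada84e', 10332307)
# 40 chunk documents, but only one distinct files_id value: with files_id as
# the shard key, MongoDB cannot split this range, hence "can't split".
assert len({k['files_id'] for k in keys}) == 1
```

Sharding fs.chunks on a compound key such as { files_id: 1, n: 1 } (an index the shards already build, as the migration log shows) would provide split points inside a single large file.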