These articles await you in this issue:
· "Understanding the customers' problems and taking them seriously …", interview with Fried Saacke
· MySQL 8 X DevAPI: new paths for developers of modern applications, Mario Beck and Carsten Thalheimer
· MySQL HA – solutions for back end and front end, Matthias Klein
· When data grows historically, Antoniya Kuhlmeyer
· No fear of key-value stores: insights into the Oracle NoSQL DB, Karin Patenge
· NoSQL, NewSQL and cloud-native databases, Andreas Buckenhofer
· PostgreSQL – the new standard for general-purpose databases, Jan Karremans
· Why PostgreSQL is currently so successful, Daniel Westermann
· Oracle, PostgreSQL, Docker and Kubernetes at die Mobiliar, Hans Eichenberger and Daniel Westermann
· HDFS vs. NoSQL vs. RDBMS – which datastore for which project?, Dr. Nadine Schöne and Enno Schulte
· Tips for configuring a VM for optimal support of the Oracle database
PostgreSQL has gained enormously in importance in recent years
Oracle offers more data-management systems than just the Oracle database
· Oracle on VMware, Yvonne Murphy
· More realistic costs for table access via an index, Clemens Bleile
· Alternative facts – influencing the uninfluenceable, Stefan Winkler
· The best of both worlds, Bruno Cirone
· On the value of best practices, Jürgen Sieben
· Tips and tricks from Gerd's treasure trove: UTF-8 in CSV files and the problem with Excel, Gerd Volberg
· Optimal preparation for Oracle certifications, Rainer Schaub
Here you will find the listings referenced in the magazine:
Karin Patenge
"No fear of key-value stores: insights into the Oracle NoSQL DB"
[oracle@bigdatalite ~]$ echo $KVHOME
/u01/nosql/kv-ee
[oracle@bigdatalite ~]$ echo $KVROOT
/u02/kvroot
[oracle@bigdatalite ~]$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host bigdatalite.localdomain
Pinging components of store kvstore based upon topology sequence #14
10 partitions and 1 storage nodes
Time: 2018-10-23 15:48:51 UTC Version: 12.2.4.5.12
Shard Status: healthy:1 writable-degraded:0 read-only:0 offline:0
Admin Status: healthy
Zone [name=KVLite id=zn1 type=PRIMARY allowArbiters=false] RN Status: online:1 offline:0
Storage Node [sn1] on bigdatalite.localdomain:5000 Zone: [name=KVLite id=zn1 type=PRIMARY allowArbiters=false] Status: RUNNING Ver: 12cR2.4.5.12 2017-08-18 03:27:12 UTC Build id: c79a4586d1b9
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:64,728,163 haPort:5006
[oracle@bigdatalite ~]$
[oracle@bigdatalite ~]$ java -jar $KVHOME/lib/kvcli.jar -host bigdatalite.localdomain -port 5000 -store kvstore
kv->
kv-> show topology
store=kvstore numPartitions=10 sequence=14
zn: id=zn1 name=KVLite repFactor=1 type=PRIMARY allowArbiters=false
sn=[sn1] zn:[id=zn1 name=KVLite] bigdatalite.localdomain:5000 capacity=1 RUNNING
[rg1-rn1] RUNNING
No performance info available
shard=[rg1] num partitions=10
[rg1-rn1] sn=sn1
kv-> verify configuration
Verify: starting verification of store kvstore based upon topology sequence #14
10 partitions and 1 storage nodes
Time: 2018-10-23 15:49:59 UTC Version: 12.2.4.5.12
See bigdatalite.localdomain:/u02/kvroot/kvstore/log/kvstore_{0..N}.log for progress messages
Verify: Shard Status: healthy:1 writable-degraded:0 read-only:0 offline:0
Verify: Admin Status: healthy
Verify: Zone [name=KVLite id=zn1 type=PRIMARY allowArbiters=false] RN Status: online:1 offline:0
Verify: == checking storage node sn1 ==
Verify: sn1: The root directory on sn1 does not have a size specified: /u02/kvroot
Verify: Storage Node [sn1] on bigdatalite.localdomain:5000 Zone: [name=KVLite id=zn1 type=PRIMARY allowArbiters=false] Status: RUNNING Ver: 12cR2.4.5.12 2017-08-18 03:27:12 UTC Build id: c79a4586d1b9
Verify: Admin [admin1] Status: RUNNING,MASTER
Verify: Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:64,728,163 haPort:5006
Verification complete, 0 violations, 1 note found.
Verification note: [sn1] The root directory on sn1 does not have a size specified: /u02/kvroot
kv-> quit
Listing 2: Pinging the Oracle NoSQL DB store and inspecting its topology
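A session like the one above can be wrapped in a simple scripted health check. The sketch below is hypothetical and only assumes that the `ping` output contains a "Shard Status:" line in the format shown in the listing; it reads the ping output on stdin (e.g. piped from `java -jar $KVHOME/lib/kvstore.jar ping …`) so the parsing can be tested without a running store:

```shell
#!/bin/bash
# Hypothetical health check built on the ping output shown above.
# Reads the ping output on stdin and fails if any shard is offline.

check_ping() {
  # Extract the offline count from the line
  #   Shard Status: healthy:1 writable-degraded:0 read-only:0 offline:0
  local offline
  offline=$(sed -n 's/.*Shard Status:.*offline:\([0-9]*\).*/\1/p')
  if [ "${offline:-1}" -ne 0 ]; then
    echo "WARNING: ${offline:-unknown} shard(s) offline"
    return 1
  fi
  echo "store healthy"
}

# Example with the status line from the listing:
echo "Shard Status: healthy:1 writable-degraded:0 read-only:0 offline:0" \
  | check_ping   # → store healthy
```

In production one would pipe the real ping output into the function and hook the nonzero exit status into monitoring.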
---
server = new ArrayList();
server.add("bigdatalite.localdomain:5000");
// Graph configuration
// Graph name: "meetup"
// KV store name: "kvstore"
cfg = GraphConfigBuilder.forPropertyGraphNosql() \
.setName("meetup") \
.setStoreName("kvstore") \
.setHosts(server) \
.setMaxNumConnections(2) \
.hasEdgeLabel(true) \
.setLoadEdgeLabel(true) \
.addVertexProperty("type", PropertyType.STRING, "NA") \
.addVertexProperty("city_name", PropertyType.STRING, "NA") \
…
.setLoadVertexLabels(true) \
.setUseVertexPropertyValueAsLabel("type") \
.setPropertyValueDelimiter(",") \
.build();
opg = OraclePropertyGraph.getInstance(cfg);
opg.getKVStoreConfig();
// Prepare for data load
opg.setClearTableDOP(2);
opg.clearRepository();
opgdl=OraclePropertyGraphDataLoader.getInstance();
vfile="/home/oracle/Documents/Meetup/data/meetup.opv";
efile="/home/oracle/Documents/Meetup/data/meetup.ope";
// Load data
opgdl.loadData(opg, vfile, efile, 2);
Listing 3: Loading graph data into the Oracle NoSQL DB
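Before calling `loadData()`, a rough sanity check of the flat files can catch empty or truncated inputs. The sketch below is hypothetical: it assumes only that each line of the `.opv`/`.ope` files is comma-separated with the element id in the first field (one line per property), and the sample data is made up rather than taken from the real meetup files:

```shell
#!/bin/bash
# Hypothetical pre-load check: count distinct element ids in the
# vertex (.opv) and edge (.ope) flat files before loading them.
# Assumption: first comma-separated field = element id.
VFILE=$(mktemp); EFILE=$(mktemp)

# Made-up sample lines, two vertices (ids 1 and 2) and one edge (id 10):
printf '1,type,1,city,,\n1,city_name,1,Berlin,,\n2,type,1,member,,\n' > "$VFILE"
printf '10,1,2,located_in,,,,\n' > "$EFILE"

echo "distinct vertices: $(cut -d, -f1 "$VFILE" | sort -u | wc -l)"
echo "distinct edges:    $(cut -d, -f1 "$EFILE" | sort -u | wc -l)"

rm -f "$VFILE" "$EFILE"
```

Pointing the two `cut` pipelines at the real `meetup.opv` and `meetup.ope` paths from the listing gives a quick plausibility check against the counts the loader reports.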
---
Clemens Bleile
"More realistic costs for table access via an index"
oracle@18cR0:/home/oracle/TABLE_CACHED_BLOCKS/ [gen180] cat parallel_insert_ti3.bash
#!/bin/bash
sqlplus -S / <<EOF
drop table ti3 purge;
create table ti3 as select * from ti1 where 1=2;
drop sequence ti3s;
create sequence ti3s order;
exit
EOF
sqlplus -S / <<EOF &
begin
for i in (select * from ti2 where mod(object_id,3)=0) loop
insert into ti3 values (i.owner, i.object_name, i.subobject_name, ti3s.nextval,
i.data_object_id, i.object_type, i.created, i.last_ddl_time,
i.timestamp, i.status, i.temporary, i.generated,
i.secondary, i.namespace, i.edition_name, i.sharing,
i.editionable, i.oracle_maintained, i.application,
i.default_collation, i.duplicated, i.sharded,
i.created_appid, i.created_vsnid, i.modified_appid,
i.modified_vsnid);
commit;
end loop;
end;
/
exit
EOF
sqlplus -S / <<EOF &
begin
for i in (select * from ti2 where mod(object_id,3)=1) loop
insert into ti3 values (i.owner, i.object_name, i.subobject_name, ti3s.nextval,
i.data_object_id, i.object_type, i.created, i.last_ddl_time,
i.timestamp, i.status, i.temporary, i.generated,
i.secondary, i.namespace, i.edition_name, i.sharing,
i.editionable, i.oracle_maintained, i.application,
i.default_collation, i.duplicated, i.sharded,
i.created_appid, i.created_vsnid, i.modified_appid,
i.modified_vsnid);
commit;
end loop;
end;
/
exit
EOF
sqlplus -S / <<EOF &
begin
for i in (select * from ti2 where mod(object_id,3)=2) loop
insert into ti3 values (i.owner, i.object_name, i.subobject_name, ti3s.nextval,
i.data_object_id, i.object_type, i.created, i.last_ddl_time,
i.timestamp, i.status, i.temporary, i.generated,
i.secondary, i.namespace, i.edition_name, i.sharing,
i.editionable, i.oracle_maintained, i.application,
i.default_collation, i.duplicated, i.sharded,
i.created_appid, i.created_vsnid, i.modified_appid,
i.modified_vsnid);
commit;
end loop;
end;
/
exit
EOF
wait
sqlplus -S / <<EOF
drop index ti3_i1;
set echo on
exec dbms_stats.gather_table_stats(user,'TI3');
create unique index ti3_i1 on ti3 (object_id) pctfree 30;
exit
EOF
oracle@18cR0:/home/oracle/TABLE_CACHED_BLOCKS/ [gen180] ./parallel_insert_ti3.bash
Listing 7
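The script fans the load out to three concurrent `sqlplus` sessions, split on `mod(object_id,3)`. A small illustration (with made-up ids, not the real TI2 data) of why the three predicates are disjoint and together cover every row:

```shell
#!/bin/bash
# Illustrate the mod-3 split used by parallel_insert_ti3.bash:
# every object_id lands in exactly one of the three sessions.
for id in $(seq 1 9); do
  echo "object_id=$id -> session $(( id % 3 ))"
done
# ids 3, 6, 9 go to session 0; 1, 4, 7 to session 1; 2, 5, 8 to session 2
```

Because the remainders 0, 1 and 2 partition the integers, the three background `sqlplus` heredocs never insert the same row twice and, after `wait`, TI3 contains every row of TI2 exactly once (interleaved by the `order`ed sequence TI3S).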
---


