Exadata Database Machine Overview
This article provides an overview of Oracle's Exadata Database Machine and the
benefits of this engineered system. The Database Machine is fully integrated with
Oracle Database and uses Exadata storage servers. It provides high performance and
high availability for all types of database workloads, and it eliminates the typical
bottlenecks, particularly around IOPS, so you can consolidate multiple databases
onto a single machine. These machines are also very easy to deploy: a single script
does the job for you, driven by the Oracle Exadata Deployment Assistant (which
generates the XML input file).
Why do we need database machines?
Data warehousing issues:
The Database Machine supports large and complex queries. Because the storage is
connected over a high-speed InfiniBand network, you get more than enough I/O
throughput to support massive scans. The Smart Scan feature reduces unproductive
I/O, parallel processing improves system performance, and Hybrid Columnar
Compression reduces the storage space required.
OLTP issues:
The Database Machine addresses the usual OLTP issues. It supports large user
populations and transaction volumes by providing enough I/Os per second and by
caching frequently accessed data. It delivers consistent performance across all
tables while minimizing I/O latency.
Consolidation issues:
To reduce datacenter space, many companies turn to virtualization technologies for
small and mid-range workloads. How does the Database Machine address
consolidation? You can accommodate multiple workloads on the same box instead of
running multiple database servers in your environment. You can also prioritize the
workloads, but that requires proper analysis and planning prior to implementation.
Configuration issues:
The Database Machine eliminates configuration issues completely. Since Oracle alone
provides support for all of the machine's components, all the hardware and firmware
are guaranteed to be compatible with the Oracle Database software. Oracle also ships
a well-balanced configuration across the machine to eliminate bottlenecks.
The Database Machine consists of the following components:
1. Exadata storage servers (cells)
2. Compute nodes (database servers - typically Oracle Linux x86_64 servers)
3. InfiniBand switches (internal networking)
4. Cisco switch (external networking)
5. Power distribution units
Exadata Database Machine X3-2 configurations

               No of Exadata    No of Database   No of InfiniBand
               Storage Cells    Servers          Switches
Full Rack      14               8                3
Half Rack      7                4                3
Quarter Rack   3                2                2
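To make the scaling concrete, here is a small sketch of how raw disk capacity grows with rack size. The cell counts come from the table above, and the per-cell figure (12 x 3 TB high-capacity drives) comes from the hardware tables later in this article; treat it as an illustration, not an official sizing tool.

```python
# Raw-capacity sketch for the X3-2 rack sizes listed above.
# Assumes every cell holds 12 x 3 TB high-capacity drives.

RACKS = {"Full Rack": 14, "Half Rack": 7, "Quarter Rack": 3}  # cells per rack
DISKS_PER_CELL = 12
TB_PER_DISK = 3  # high-capacity option

def raw_capacity_tb(cells):
    """Total raw (unmirrored) disk capacity of `cells` storage servers, in TB."""
    return cells * DISKS_PER_CELL * TB_PER_DISK

for rack, cells in RACKS.items():
    print(f"{rack}: {cells} cells -> {raw_capacity_tb(cells)} TB raw")
# Full Rack: 14 cells -> 504 TB raw
```

Note that this is raw capacity; the usable figure is lower once ASM mirroring (discussed later) is applied.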
Exadata Storage Servers (cells)
The Exadata storage server is designed exclusively for Oracle Database. It is a
self-contained storage platform that runs the Exadata Storage Server software.
Databases are typically deployed across multiple Exadata storage servers to deliver
high performance. The database servers (compute nodes) and the cells communicate
with each other over a 40 Gb/s InfiniBand network. Each storage server runs Oracle
Linux x86_64, and its storage is managed by the Exadata cell software.
You cannot allocate Exadata storage to non-Oracle database servers: the Exadata
storage servers are designed exclusively to provide storage to Oracle databases
within the rack.
Exadata – Quarter Rack Example
Exadata Storage Server X3-2 – Hardware Overview
Processors: 12 Intel CPU cores
System Memory: 64 GB
Disk Drives (High Performance option): 12 x 600 GB 15K RPM
Disk Drives (High Capacity option): 12 x 3 TB 7.2K RPM
Flash: 1.6 TB
Disk Controller: disk controller host bus adapter with 512 MB battery-backed write cache
InfiniBand Network: dual-port QDR (40 Gb/s) InfiniBand host channel adapter
Remote Management: Integrated Lights Out Manager (ILOM) Ethernet port
Power Supplies: 2 x redundant hot-swappable power supplies
Exadata Storage Server X3-2 Configuration Options
Exadata storage servers are available in two configurations: 1. High Performance
(HP) disks and 2. High Capacity (HC) disks. If you are looking for more storage
space, choose the high-capacity disks (e.g. data warehousing). If you need high
performance (e.g. OLTP), choose the high-performance disks. See the table below for
the differences between the two configurations.
                      High Performance Disks   High Capacity Disks
Raw Disk Capacity     7.2 TB                   36 TB
Raw Disk Throughput   1.8 GB/sec               1.3 GB/sec
Flash Throughput      6.75 GB/sec              7.25 GB/sec
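The raw-capacity rows in the table follow directly from the drive counts: 12 drives per cell times the per-drive size. A quick arithmetic check, using only numbers quoted in the tables above:

```python
# Per-cell raw capacity for the two X3-2 disk options (12 drives per cell).
DISKS_PER_CELL = 12

hp_gb = DISKS_PER_CELL * 600   # high performance: 12 x 600 GB = 7200 GB = 7.2 TB
hc_tb = DISKS_PER_CELL * 3     # high capacity:    12 x 3 TB   = 36 TB

# Time to scan one cell's high-performance disks end to end at the
# quoted 1.8 GB/sec raw disk throughput:
scan_seconds = hp_gb / 1.8

print(hp_gb, hc_tb, scan_seconds)  # 7200 36 4000.0
```

So even at full raw throughput, a complete scan of one high-performance cell takes over an hour, which is why features such as Smart Scan (which avoids unproductive I/O) matter so much for warehouse workloads.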
X3-2 Database Server Hardware Overview
Processors: 16 Intel CPU cores
System Memory: 256 GB
Disk Drives: 4 x 300 GB 10K RPM
Disk Controller: disk controller host bus adapter with 512 MB battery-backed write cache
Remote Management: Integrated Lights Out Manager (ILOM) Ethernet port
Power Supplies: 2 x redundant hot-swappable power supplies
Network Interfaces:
• Dual-port QDR (40 Gb/s) InfiniBand host channel adapter
• Four 1/10 Gb Ethernet ports (copper)
• Two 10 Gb Ethernet ports (optical)
Database Machine X3-8 is offered only as a Full Rack.
Exadata – Database Machine X3-8 Full Rack
Both the X3-2 and X3-8 Full Racks contain 14 Exadata X3-2 cells, 3 InfiniBand
switches, 2 power distribution units (PDUs) and an Ethernet switch. The difference
is in the database server configuration: the X3-8 has only 2 compute nodes, whereas
the X3-2 Full Rack has 8, but each X3-8 database server has far more CPU cores and
physical memory.
X3-8 Database Machine configuration:
Processors: 80 Intel CPU cores
System Memory: 2 TB
Disk Drives: 4 x 300 GB 10K RPM
Disk Controller: disk controller host bus adapter with 512 MB battery-backed write cache
Remote Management: Integrated Lights Out Manager (ILOM) Ethernet port
Power Supplies: 4 x redundant hot-swappable power supplies
Network Interfaces:
• Dual-port QDR (40 Gb/s) InfiniBand host channel adapter
• Four 1/10 Gb Ethernet ports (copper)
• Two 10 Gb Ethernet ports (optical)
Exadata Storage Expansion Racks
Suppose you have fully utilized a Full Rack Database Machine and are running out of
Exadata storage cell space. How do you scale up the environment? Do you need to
order another Database Machine? No: you only require more Exadata storage servers,
so you order an Exadata Storage Expansion Rack, which ships with Exadata storage
servers and InfiniBand switches.
A Full Rack storage expansion accommodates 18 Exadata storage servers and
3 InfiniBand switches. A Half Rack accommodates 9 Exadata storage servers and
3 InfiniBand switches. A Quarter Rack holds only 4 Exadata storage servers and
2 InfiniBand switches.
InfiniBand Network Overview
InfiniBand provides 40 Gb/s interconnectivity between the database servers and the
Exadata storage servers. It is used for storage networking, the RAC interconnect,
and high-performance external connectivity. It uses the Zero-loss Zero-copy
Datagram Protocol (ZDP), which requires very little CPU overhead.
To explore a 3D view of the X3-2 Exadata Database Machine, check out the link
below: http://oracle.com.edgesuite.net/producttours/3d/exadata-x3-2/index.html
Architecture of the Exadata Database Machine
The Exadata Database Machine provides a high-performance, highly available platform
with plenty of storage space for Oracle Database. High-availability clustering is
provided by Oracle RAC, ASM is responsible for storage mirroring, and InfiniBand
provides a high-bandwidth, low-latency cluster interconnect and storage network.
The powerful compute nodes join the RAC cluster to offer great performance.
In this article, we will look at:
the Exadata Database Machine network architecture
the Exadata Database Machine storage architecture
the Exadata Database Machine software architecture
how to scale up the Exadata Database Machine
Key components of the Exadata Database Machine
Shared storage: Exadata storage servers
The Database Machine provides intelligent, high-performance shared storage to both
single-instance and RAC implementations of Oracle Database using Exadata Storage
Server technology. The Exadata storage servers are designed to provide storage to
Oracle Database through ASM (Automatic Storage Management). ASM keeps redundant
copies of data on separate Exadata storage servers, which protects against data
loss if you lose a disk or an entire storage server.
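This mirroring has a direct cost in usable capacity. As a rough sketch (the mirroring factors are the standard ASM redundancy levels, 2-way for NORMAL and 3-way for HIGH; the raw figure is an illustrative Full Rack of high-capacity cells from the tables above, not an official sizing):

```python
# Usable-capacity sketch under ASM redundancy. Ignores ASM's free-space
# reserve for rebalancing, so real usable figures are somewhat lower.

MIRROR_FACTOR = {"EXTERNAL": 1, "NORMAL": 2, "HIGH": 3}

def usable_tb(raw_tb, redundancy="NORMAL"):
    """Approximate usable TB after ASM mirroring at the given redundancy level."""
    return raw_tb / MIRROR_FACTOR[redundancy]

raw = 14 * 36  # 504 TB raw: a Full Rack of 14 high-capacity cells
print(usable_tb(raw, "NORMAL"))  # 252.0
print(usable_tb(raw, "HIGH"))    # 168.0
```

In other words, NORMAL redundancy halves the raw capacity and HIGH redundancy leaves a third of it, in exchange for surviving the loss of one or two failure groups respectively.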
Shared network: InfiniBand
The Database Machine uses the InfiniBand network for the interconnect between the
database servers and the Exadata storage servers. The InfiniBand network runs at
40 Gb/s, so latency is very low and bandwidth is high. In the Database Machine,
multiple InfiniBand switches and interface bonding are used to provide network
redundancy.
Shared cache:
In the Database Machine's RAC environment, the database instances' buffer caches
are shared. If one instance holds data in its cache that another instance needs,
the data is shipped to the requesting node over the InfiniBand cluster
interconnect. This improves performance because the transfer happens memory to
memory across the cluster interconnect.
Database server cluster:
A Full Rack Database Machine contains 8 compute nodes, so you can build an 8-node
cluster using Oracle RAC. Each X3-2 compute node has 16 CPU cores and 256 GB of
memory, while each X3-8 node has 80 cores and 2 TB.
Cluster interconnect:
By default, the Database Machine is configured to use the InfiniBand storage
network as the cluster interconnect.
Database Machine – Network Architecture
Three different networks are shown in the diagram above.
Management network – ILOM:
ILOM (Integrated Lights Out Manager) is the default remote hardware management
interface on all Oracle servers. It uses a traditional Ethernet network to manage
the Exadata Database Machine remotely. ILOM provides a graphical remote
administration facility and also helps system administrators monitor the hardware
remotely.
Client access:
The database servers are accessed by application servers over the Ethernet network.
Bonds are created across multiple Ethernet adapters for network redundancy and load
balancing. Note: the Database Machine includes a Cisco switch to provide
connectivity to the Ethernet networks.
InfiniBand Network Architecture
The diagram below shows how the InfiniBand links are connected to the different
components in an X3-2 Half/Full Rack setup.
infiniband switch x3-2 half-full rack
The spine switch exists only in the Half Rack and Full Rack configurations; it lets
you scale the environment by providing InfiniBand links to additional racks. The
Quarter Rack X3-2 ships with leaf switches only. You can scale up to 18 racks by
cabling additional InfiniBand links between the switches.
How can we interconnect two racks? Have a close look at the diagram below: a single
InfiniBand network is formed based on a fat-tree topology.
Scale two Racks
Six ports on each leaf switch are reserved for external connectivity. These ports
are used for connecting to media servers for tape backup, connecting to external
ETL servers, and client or application access, including Oracle Exalogic Elastic
Cloud.
Database Machine Software Architecture
Software architecture – Exadata
CELLSRV, MS, RS and IORM are the important processes on the Exadata storage cell
servers. On the database servers, the cells' grid disks are used to create the ASM
disk groups. The database server also carries a special library called LIBCELL; in
combination with the database kernel and ASM, LIBCELL transparently maps database
I/O to the Exadata storage servers.
No other filesystems may be created on the Exadata storage cells: Oracle Database
must use ASM as its volume manager and filesystem.
Customers can choose between Oracle Linux and Oracle Solaris x86 as the database
servers' operating system. Exadata supports Oracle Database 11g Release 2 and later
versions.
Database Machine Storage Architecture
Exadata storage cell
The Exadata storage servers run the software components mentioned above. Oracle
Linux is the default operating system for the Exadata storage cell software.
CELLSRV is the core Exadata storage component, providing most of the services. The
Management Server (MS) provides Exadata cell management and configuration; it is
responsible for sending alerts and collects some statistics in addition to those
collected by CELLSRV. The Restart Server (RS) is used to start up and shut down the
CELLSRV and MS services, and it monitors these services to automatically restart
them if required.
How are the disks mapped to the database from the Exadata storage servers?
Exadata Disks overview
If you look at the image below, you can see that the database servers treat each
cell node as a failure group.
Exadata DG
Exploring the Exadata Storage Cell Processes
The Exadata storage cell is new to the industry; only Oracle offers storage this
customized for Oracle Database. Unlike traditional SAN storage, Exadata storage
helps reduce processing at the database node level: because each storage cell has
its own processors and 64 GB of physical memory, it can easily offload work from
the DB nodes. It also has a large amount of flash storage to speed up I/O. The
default flash cache mode is write-through, and the flash can also be used as
regular storage (like a hard drive). Flash devices can deliver roughly 10x better
performance than ordinary hard drives.
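To see why that 10x matters even with a partial cache, here is a back-of-the-envelope model of average read latency as the flash cache hit ratio rises. The latency figures are illustrative round numbers, not Exadata specifications:

```python
# Effect of a flash cache on average read latency.
# Illustrative numbers: disk ~5 ms per random read, flash ~0.5 ms (10x faster).

DISK_MS, FLASH_MS = 5.0, 0.5

def avg_latency_ms(hit_ratio):
    """Average read latency for a given flash-cache hit ratio (0.0 to 1.0)."""
    return hit_ratio * FLASH_MS + (1 - hit_ratio) * DISK_MS

for h in (0.0, 0.5, 0.9):
    print(f"hit ratio {h:.0%}: {avg_latency_ms(h):.2f} ms")
```

Even a 50% hit ratio roughly halves the average latency, and at 90% the average read is about five times faster than disk alone, which is why caching the hot working set in flash pays off so quickly for OLTP.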
Examine the Exadata Storage cell Processes
1. Log in to the Exadata storage cell.
login as: root
[email protected]'s password:
Last login: Sat Nov 15 01:50:58 2014
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# uname -a
Linux uaexacell1 2.6.39-300.26.1.el5uek #1 SMP Thu Jan 3 18:31:38 PST 2013 x86_64 x86_64 x86_64
GNU/Linux
[root@uaexacell1 ~]#
2. List the Exadata cell Restart Server (RS) processes.
[root@uaexacell1 ~]# ps -ef |grep cellrs
root 10001     1  0 14:23 ?      00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrssrm -ms 1 -cellsrv 1
root 10009 10001  0 14:23 ?      00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsmmt -ms 1 -cellsrv 1
root 10010 10001  0 14:23 ?      00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsomt -ms 1 -cellsrv 1
root 10011 10001  0 14:23 ?      00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsbmt -ms 1 -cellsrv 1
root 10012 10011  0 14:23 ?      00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsbkm -rs_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsms.state -cellsrv_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsos.state -debug 0
root 10022 10012  0 14:23 ?      00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrssmt -rs_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsms.state -cellsrv_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsos.state -debug 0
root 12992 12945  0 14:48 pts/2  00:00:00 grep cellrs
[root@uaexacell1 ~]#
RS, the Restart Server process, is responsible for keeping the CELLSRV and MS
processes up at all times. If either process stops responding or is terminated, RS
automatically restarts it.
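The RS behaviour described above is essentially a watchdog loop: poll the child processes and respawn any that have exited. Here is a toy sketch of that idea in Python; it is not Oracle's implementation, just a minimal model of the pattern, using a stand-in command instead of CELLSRV/MS.

```python
import subprocess
import time

def watchdog(cmd, checks=3, interval=0.1):
    """Toy model of the Restart Server idea: poll one child process and
    respawn it whenever a check finds that it has exited."""
    proc = subprocess.Popen(cmd)
    restarts = 0
    for _ in range(checks):
        time.sleep(interval)
        if proc.poll() is not None:       # child has terminated
            proc = subprocess.Popen(cmd)  # respawn it, as RS does for MS/CELLSRV
            restarts += 1
    if proc.poll() is None:               # clean up the last child
        proc.kill()
    proc.wait()
    return restarts

# "true" exits immediately, so every check finds a dead child and respawns it.
print(watchdog(["true"], checks=3))   # 3
```

The real RS is of course more involved (it supervises several services, keeps state files, and backs off on repeated failures), but the kill-and-respawn demo later in this article shows exactly this behaviour from the outside.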
3. List the Management Server (MS) process. MS maintains the cell configuration
with the help of CellCLI (the command-line utility). It is also responsible for
sending alerts and collecting the Exadata cell statistics.
[root@uaexacell1 ~]# ps -ef | grep ms.err
root 10013 10009  1 14:23 ?      00:00:21 /usr/java/jdk1.5.0_15/bin/java -Xms256m -Xmx512m -Djava.library.path=/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.err
root 13945 12945  0 14:56 pts/2  00:00:00 grep ms.err
[root@uaexacell1 ~]#
The MS (Management Server) process's parent process is RS (the Restart Server). RS
will restart MS if it crashes or terminates abnormally.
4. CELLSRV is a multi-threaded process that provides the storage services to the
database nodes. CELLSRV communicates with the Oracle database to serve simple block
requests, such as database buffer cache reads, as well as Smart Scan requests. You
can list the CELLSRV process using the command below.
[root@uaexacell1 ~]# ps -ef | grep "/cellsrv "
root  5705 10010  8 19:13 ?      00:08:20 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellsrv 100 5000 9 5042
1000  8390  4457  0 20:57 pts/1  00:00:00 grep /cellsrv
[root@uaexacell1 ~]#
The CELLSRV process's parent is likewise the RS process (Restart Server). RS will
restart CELLSRV if it crashes or terminates abnormally.
5.Let me kill the MS process and see if it restarts automatically.
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 10013 10009  0 14:23 ?      00:00:23 /usr/java/jdk1.5.0_15/bin/java -Xms256m -Xmx512m -Djava.library.path=/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.err
root 15220 12945  0 15:06 pts/2  00:00:00 grep ms.err
[root@uaexacell1 ~]# kill -9 10013
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 15245 12945  0 15:07 pts/2  00:00:00 grep ms.err
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 15249 12945  0 15:07 pts/2  00:00:00 grep ms.err
[root@uaexacell1 ~]#
Within a few seconds, a new MS process has started with a new PID.
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 15366 10009 74 15:07 ?      00:00:00 /usr/java/jdk1.5.0_15/bin/java -Xms256m -Xmx512m -Djava.library.path=/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.err
root 15379 12945  0 15:07 pts/2  00:00:00 grep ms.err
[root@uaexacell1 ~]#
6. How do we stop and start the services on the Exadata storage cell using the init
scripts? Like other startup scripts, the cell script is located in /etc/init.d, and
a link is added in /etc/rc3.d to bring up the cell processes at boot.
[root@uaexacell1 ~]# cd /etc/init.d
[root@uaexacell1 init.d]# ls -lrt |grep cell
lrwxrwxrwx 1 root root 50 Nov 15 01:15 celld -> /opt/oracle/cell/cellsrv/deploy/scripts/unix/celld
[root@uaexacell1 init.d]# cd /etc/rc3.d
[root@uaexacell1 rc3.d]# ls -lrt |grep cell
lrwxrwxrwx 1 root root 15 Nov 15 01:15 S99celld -> ../init.d/celld
[root@uaexacell1 rc3.d]#
This script can be used to start, stop and restart the Exadata cell software.
To stop the cell software:
[root@uaexacell1 rc3.d]# ./S99celld stop
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
[root@uaexacell1 rc3.d]#
To start the cell software:
[root@uaexacell1 rc3.d]# ./S99celld start
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
[root@uaexacell1 rc3.d]#
To restart the cell software:
[root@uaexacell1 rc3.d]# ./S99celld restart
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
[root@uaexacell1 rc3.d]#
Cell software services are managed using the celladmin user and the CellCLI
utility. You can also start, stop and restart the services with CellCLI, which we
will look at in the next article.
I hope this article gives you a good overview of the Exadata storage cell
processes.
Exadata – CELLCLI Command-Line Utility
Exadata storage is managed with the CellCLI command-line utility. The Management
Server (MS) works with CellCLI to maintain the configuration on the system. CellCLI
can be launched by the "celladmin" or "root" user. In this article, we will see how
to list the storage objects and how to stop and start the cell services using
CellCLI. At the end of the article, we will see how to use the help command to work
out command syntax.
1. Login to Exadata storage cell using celladmin user and start cellcli utility.
[celladmin@uaexacell1 ~]$ id
uid=1000(celladmin) gid=500(celladmin) groups=500(celladmin),502(cellusers)
[celladmin@uaexacell1 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Sun Nov 16 16:05:27 GMT+05:30 2014
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1
CellCLI>
Note: CellCLI is case-insensitive, so you can use both upper and lower case.
2. List the cell information. (Exadata storage box)
CellCLI> list cell
         uaexacell1     online

CellCLI> list cell detail
         name:                   uaexacell1
         bbuTempThreshold:       60
         bbuChargeThreshold:     800
         bmcType:                absent
         cellVersion:            OSS_11.2.3.2.1_LINUX.X64_130109
         cpuCount:               1
         diagHistoryDays:        7
         fanCount:               1/1
         fanStatus:              normal
         flashCacheMode:         WriteThrough
         id:                     a3c87541-4d0e-478a-9ec9-8a4bea3eeaac
         interconnectCount:      2
         interconnect1:          eth1
         iormBoost:              0.0
         ipaddress1:             192.168.1.5/24
         kernelVersion:          2.6.39-300.26.1.el5uek
         makeModel:              Fake hardware
         metricHistoryDays:      7
         offloadEfficiency:      1.0
         powerCount:             1/1
         powerStatus:            normal
         releaseVersion:         11.2.3.2.1
         releaseTrackingBug:     14522699
         status:                 online
         temperatureReading:     0.0
         temperatureStatus:      normal
         upTime:                 0 days, 2:24
         cellsrvStatus:          running
         msStatus:               running
         rsStatus:               running

CellCLI>
3. List the available storage devices on the system. This lists both the hard
drives and the flash disks.
CellCLI> LIST LUN
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13  normal
CellCLI>
My Exadata storage is running on virtual hardware; that is why the storage devices
are listed with full paths. On real hardware, you would just see the controller and
disk numbers (e.g. 0_0 0_0 normal). Note: the Exadata VM is used by Oracle for
training purposes only.
4. The command below lists only the hard disks attached to the Exadata server.
CellCLI> list lun where disktype=harddisk
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12   normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13   /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13   normal
CellCLI>
5. The command below lists only the flash devices attached to the Exadata storage
server.
CellCLI> list lun where disktype=flashdisk
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12  normal
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13  /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13  normal
CellCLI>
6. List the cell disks.
CellCLI> list celldisk
         CD_DISK00_uaexacell1   normal
         CD_DISK01_uaexacell1   normal
         CD_DISK02_uaexacell1   normal
         CD_DISK03_uaexacell1   normal
         CD_DISK04_uaexacell1   normal
         CD_DISK05_uaexacell1   normal
         CD_DISK06_uaexacell1   normal
         CD_DISK07_uaexacell1   normal
         CD_DISK08_uaexacell1   normal
         CD_DISK09_uaexacell1   normal
         CD_DISK10_uaexacell1   normal
         CD_DISK11_uaexacell1   normal
         CD_DISK12_uaexacell1   normal
         CD_DISK13_uaexacell1   normal
         FD_00_uaexacell1       normal
         FD_01_uaexacell1       normal
         FD_02_uaexacell1       normal
         FD_03_uaexacell1       normal
         FD_04_uaexacell1       normal
         FD_05_uaexacell1       normal
         FD_06_uaexacell1       normal
         FD_07_uaexacell1       normal
         FD_08_uaexacell1       normal
         FD_09_uaexacell1       normal
         FD_10_uaexacell1       normal
         FD_11_uaexacell1       normal
         FD_12_uaexacell1       normal
         FD_13_uaexacell1       normal
CellCLI>
7. List the grid disks.
CellCLI> list griddisk
         DATA01_CD_DISK00_uaexacell1   active
         DATA01_CD_DISK01_uaexacell1   active
         DATA01_CD_DISK02_uaexacell1   active
         DATA01_CD_DISK03_uaexacell1   active
         DATA01_CD_DISK04_uaexacell1   active
         DATA01_CD_DISK05_uaexacell1   active
         DATA01_CD_DISK06_uaexacell1   active
         DATA01_CD_DISK07_uaexacell1   active
         DATA01_CD_DISK08_uaexacell1   active
         DATA01_CD_DISK09_uaexacell1   active
         DATA01_CD_DISK10_uaexacell1   active
         DATA01_CD_DISK11_uaexacell1   active
         DATA01_CD_DISK12_uaexacell1   active
         DATA01_CD_DISK13_uaexacell1   active
CellCLI>
8. List the flash disks which are configured as flashcache.
CellCLI> list flashcache detail
         name:                uaexacell1_FLASHCACHE
         cellDisk:            FD_05_uaexacell1,FD_02_uaexacell1,FD_04_uaexacell1,FD_03_uaexacell1,FD_01_uaexacell1,FD_12_uaexacell1
         creationTime:        2014-11-16T18:57:54+05:30
         degradedCelldisks:
         effectiveCacheSize:  4.3125G
         id:                  f972c16a-5fcc-4cc7-8083-a06b026f662b
         size:                4.3125G
         status:              normal
CellCLI>
9. List the flash disks which are configured as flash log.
CellCLI> list flashlog detail
         name:                uaexacell1_FLASHLOG
         cellDisk:            FD_13_uaexacell1
         creationTime:        2014-11-16T16:31:23+05:30
         degradedCelldisks:
         effectiveSize:       512M
         efficiency:          100.0
         id:                  1fbc893b-4ab1-4861-b6cc-0b86bd45376d
         size:                512M
         status:              normal
CellCLI>
10. List only the status of the RS, MS, and CELLSRV services.
CellCLI> list cell attributes rsStatus, msStatus, cellsrvStatus detail
	 rsStatus:              running
	 msStatus:              running
	 cellsrvStatus:         running
11. To stop the services using CellCLI:
CellCLI> alter cell shutdown services all
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
CellCLI>
12. To start the services using CellCLI:
CellCLI> alter cell startup services all
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
CellCLI>
13. To restart the services forcefully using CellCLI:
CellCLI> alter cell restart services all force
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
CellCLI>
In the same way, you can shut down the services forcefully by replacing “restart”
with “shutdown” in the command above.
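The three service-control variants differ only in the action word, which a small shell helper (hypothetical, not part of CellCLI) can make explicit. It only builds and prints the command; on a real storage cell you would pipe the output into cellcli:

```shell
# Hypothetical helper: builds the CellCLI service-control command for a given
# action (shutdown | startup | restart), optionally forced.
# Usage on a real cell:  cell_services restart force | cellcli
cell_services() {
  cmd="alter cell $1 services all"
  if [ -n "${2:-}" ]; then
    cmd="$cmd force"     # append FORCE only when a second argument is given
  fi
  echo "$cmd"
}

cell_services restart force    # prints: alter cell restart services all force
cell_services shutdown force   # prints: alter cell shutdown services all force
```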
14. How to get command syntax help in Exadata CellCLI?
Just execute the command “help” to get the list of commands.
CellCLI> help
HELP [topic]
Available Topics:
ALTER
ALTER ALERTHISTORY
ALTER CELL
ALTER CELLDISK
ALTER FLASHCACHE
ALTER GRIDDISK
ALTER IBPORT
ALTER IORMPLAN
ALTER LUN
ALTER PHYSICALDISK
ALTER QUARANTINE
ALTER THRESHOLD
ASSIGN KEY
CALIBRATE
CREATE
CREATE CELL
CREATE CELLDISK
CREATE FLASHCACHE
CREATE FLASHLOG
CREATE GRIDDISK
CREATE KEY
CREATE QUARANTINE
CREATE THRESHOLD
DESCRIBE
DROP
DROP ALERTHISTORY
DROP CELL
DROP CELLDISK
DROP FLASHCACHE
DROP FLASHLOG
DROP GRIDDISK
DROP QUARANTINE
DROP THRESHOLD
EXPORT CELLDISK
IMPORT CELLDISK
LIST
LIST ACTIVEREQUEST
LIST ALERTDEFINITION
LIST ALERTHISTORY
LIST CELL
LIST CELLDISK
LIST FLASHCACHE
LIST FLASHCACHECONTENT
LIST FLASHLOG
LIST GRIDDISK
LIST IBPORT
LIST IORMPLAN
LIST KEY
LIST LUN
LIST METRICCURRENT
LIST METRICDEFINITION
LIST METRICHISTORY
LIST PHYSICALDISK
LIST QUARANTINE
LIST THRESHOLD
SET
SPOOL
START
CellCLI>
15. To get help on a specific topic, use the HELP <TOPIC> command.
CellCLI> HELP LIST
Enter HELP LIST <object_type> for specific help syntax.
<object_type>: {ACTIVEREQUEST | ALERTHISTORY | ALERTDEFINITION | CELL
| CELLDISK | FLASHCACHE | FLASHLOG | FLASHCACHECONTENT | GRIDDISK
| IBPORT | IORMPLAN | KEY | LUN
| METRICCURRENT | METRICDEFINITION | METRICHISTORY
| PHYSICALDISK | QUARANTINE | THRESHOLD }
CellCLI>
16. To get help on a specific command, use the syntax below.
CellCLI> HELP LIST CELLDISK
Usage: LIST CELLDISK [<name> | <filter>] [<attribute_list>] [DETAIL]
Purpose: Displays specified attributes for cell disks.
Arguments:
  <name>: The name of the cell disk to be displayed.
  <filter>: An expression which determines which cell disks should
            be displayed.
  <attribute_list>: The attributes that are to be displayed.
                    ATTRIBUTES {ALL | attr1 [, attr2]... }
Options:
[DETAIL]: Formats the display as an attribute on each line, with
an attribute descriptor preceding each value.
Examples:
LIST CELLDISK cd1 DETAIL
LIST CELLDISK where freespace > 100M
CellCLI>
You can check the Exadata storage cell alerts using the command below.
CellCLI> list alerthistory
	 1_1	 2014-11-15T01:17:14+05:30	 critical	 "File system "/" is 84% full, which is above the 80% threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/" becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr : 2.35G /tmp : 1.37G /opt : 593.27M"
	 1_2	 2014-11-15T01:25:44+05:30	 critical	 "File system "/" is 84% full, which is above the 80% threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/" becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr : 2.35G /tmp : 1.37G /opt : 593.36M"
	 1_3	 2014-11-15T01:36:51+05:30	 critical	 "File system "/" is 84% full, which is above the 80% threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/" becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr : 2.35G /tmp : 1.37G /opt : 593.38M"
	 1_4	 2014-11-15T01:44:27+05:30	 critical	 "File system "/" is 84% full, which is above the 80% threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/" becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr : 2.35G /tmp : 1.37G /opt : 593.39M"
	 1_5	 2014-11-16T15:00:21+05:30	 clear	 "File system "/" is 62% full, which is below the 75% threshold. Normal space reclamation will resume."
	 2	 2014-11-16T14:47:28+05:30	 critical	 "RS-7445 [Serv CELLSRV hang detected] [It will be restarted] [] [] [] [] [] [] [] [] [] []"
	 3	 2014-11-16T15:07:05+05:30	 critical	 "RS-7445 [Serv MS is absent] [It will be restarted] [] [] [] [] [] [] [] [] []"
	 4	 2014-11-16T16:31:51+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 5	 2014-11-16T16:32:57+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 6	 2014-11-16T16:34:42+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 7	 2014-11-16T16:36:15+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 8	 2014-11-16T16:44:28+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 9	 2014-11-16T16:49:00+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 10	 2014-11-16T16:52:32+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 11	 2014-11-16T16:58:42+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 12	 2014-11-16T16:59:48+05:30	 critical	 "RS-7445 [CELLSRV monitor disabled] [Detected a flood of restarts] [] [] [] [] [] [] [] [] [] []"
	 13	 2014-11-16T17:07:04+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
	 14	 2014-11-16T18:31:17+05:30	 critical	 "ORA-07445: exception encountered: core dump [_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
CellCLI>
Exadata Storage Cell – Administering the Disks
The Exadata storage server uses the cell software to manage its disks. Like a volume
manager, it builds a couple of virtual layers on top of the physical disks to produce grid
disks. These griddisks are then used to create ASM disk groups at the database level. In
this article, we will see how to create and delete the celldisk, griddisk, flashcache, and
flashlog using the cellcli utility as well as the Linux command line. As mentioned earlier,
flash disks can also be used to create griddisks for highly write-intensive databases,
but in most cases they are used for flashcache and flashlog because of their limited
capacity.
Exadata Storage Architecture
The diagram below shows how the virtual storage objects are layered on the Exadata
storage server: LUNs are presented on the physical disks, celldisks are created on the
LUNs, and griddisks are carved from the celldisks.
[Diagram: Exadata storage object hierarchy]
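That bottom-up layering can be sketched as a sequence of CellCLI commands (a hypothetical helper, not an Oracle tool): celldisks first, then flashlog and flashcache on the flash celldisks, then griddisks on the hard-disk celldisks. The 512M size and the DATA01 prefix are illustrative assumptions:

```shell
# Emits the CellCLI commands in dependency order; review the output and then
# pipe it into cellcli on a real storage cell:
#   provision_cell | cellcli
provision_cell() {
  echo "create celldisk all"                         # celldisks on every LUN
  echo "create flashlog all size=512M"               # flashlog before flashcache
  echo "create flashcache all"                       # remaining flash space
  echo "create griddisk all harddisk prefix=DATA01"  # griddisks for ASM
}
provision_cell
```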
Exadata storage cell disks
1. Log in to the Exadata storage server as celladmin and start the cellcli utility.
[celladmin@uaexacell1 ~]$ id
uid=1000(celladmin) gid=500(celladmin) groups=500(celladmin),502(cellusers)
[celladmin@uaexacell1 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Sun Nov 16 22:19:23 GMT+05:30 2014
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1
CellCLI>
2. List the physical disks. This shows all the attached hard disks and flash drives.
CellCLI> list physicaldisk
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12	 normal
	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13	 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13	 normal
CellCLI>
3. Check the existing celldisks.
CellCLI> LIST CELLDISK
CellCLI>
4. Create the celldisks on all the disks (this is the usual practice).
CellCLI> CREATE CELLDISK ALL
CellDisk CD_DISK00_uaexacell1 successfully created
CellDisk CD_DISK01_uaexacell1 successfully created
CellDisk CD_DISK02_uaexacell1 successfully created
CellDisk CD_DISK03_uaexacell1 successfully created
CellDisk CD_DISK04_uaexacell1 successfully created
CellDisk CD_DISK05_uaexacell1 successfully created
CellDisk CD_DISK06_uaexacell1 successfully created
CellDisk CD_DISK07_uaexacell1 successfully created
CellDisk CD_DISK08_uaexacell1 successfully created
CellDisk CD_DISK09_uaexacell1 successfully created
CellDisk CD_DISK10_uaexacell1 successfully created
CellDisk CD_DISK11_uaexacell1 successfully created
CellDisk CD_DISK12_uaexacell1 successfully created
CellDisk CD_DISK13_uaexacell1 successfully created
CellDisk FD_00_uaexacell1 successfully created
CellDisk FD_01_uaexacell1 successfully created
CellDisk FD_02_uaexacell1 successfully created
CellDisk FD_03_uaexacell1 successfully created
CellDisk FD_04_uaexacell1 successfully created
CellDisk FD_05_uaexacell1 successfully created
CellDisk FD_06_uaexacell1 successfully created
CellDisk FD_07_uaexacell1 successfully created
CellDisk FD_08_uaexacell1 successfully created
CellDisk FD_09_uaexacell1 successfully created
CellDisk FD_10_uaexacell1 successfully created
CellDisk FD_11_uaexacell1 successfully created
CellDisk FD_12_uaexacell1 successfully created
CellDisk FD_13_uaexacell1 successfully created
CellCLI> LIST CELLDISK
	 CD_DISK00_uaexacell1	 normal
	 CD_DISK01_uaexacell1	 normal
	 CD_DISK02_uaexacell1	 normal
	 CD_DISK03_uaexacell1	 normal
	 CD_DISK04_uaexacell1	 normal
	 CD_DISK05_uaexacell1	 normal
	 CD_DISK06_uaexacell1	 normal
	 CD_DISK07_uaexacell1	 normal
	 CD_DISK08_uaexacell1	 normal
	 CD_DISK09_uaexacell1	 normal
	 CD_DISK10_uaexacell1	 normal
	 CD_DISK11_uaexacell1	 normal
	 CD_DISK12_uaexacell1	 normal
	 CD_DISK13_uaexacell1	 normal
	 FD_00_uaexacell1	 normal
	 FD_01_uaexacell1	 normal
	 FD_02_uaexacell1	 normal
	 FD_03_uaexacell1	 normal
	 FD_04_uaexacell1	 normal
	 FD_05_uaexacell1	 normal
	 FD_06_uaexacell1	 normal
	 FD_07_uaexacell1	 normal
	 FD_08_uaexacell1	 normal
	 FD_09_uaexacell1	 normal
	 FD_10_uaexacell1	 normal
	 FD_11_uaexacell1	 normal
	 FD_12_uaexacell1	 normal
	 FD_13_uaexacell1	 normal
CellCLI>
We have successfully created the celldisks on all the hard disks and flash disks. This is
a one-time activity; you do not need to create celldisks again unless you replace a
faulty drive.
5. To create griddisks on all the hard disks, use the command below.
CellCLI> create griddisk ALL HARDDISK PREFIX=CD_DISK
GridDisk CD_DISK_CD_DISK00_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK01_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK02_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK03_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK04_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK05_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK06_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK07_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK08_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK09_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK10_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK11_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK12_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK13_uaexacell1 successfully created
CellCLI>
6. If you want to create a griddisk with a specific size and name, use the syntax below.
CellCLI> CREATE GRIDDISK DATA01_DG celldisk = CD_DISK00_uaexacell1, size =100M
GridDisk DATA01_DG successfully created
CellCLI> list griddisk
	 DATA01_DG	 active
CellCLI> list griddisk detail
	 name:                  DATA01_DG
	 availableTo:
	 cachingPolicy:         default
	 cellDisk:              CD_DISK00_uaexacell1
	 comment:
	 creationTime:          2014-11-16T22:27:50+05:30
	 diskType:              HardDisk
	 errorCount:            0
	 id:                    d681708b-9717-41fc-afad-78d61ca2f476
	 offset:                48M
	 size:                  96M
	 status:                active
CellCLI>
If you have an Exadata quarter rack, you need to create griddisks of the same size on
all the Exadata storage cells; Oracle ASM mirrors across the cell nodes for redundancy.
When a database requires additional space, it is highly recommended to create the new
griddisks with the same size as the existing ones.
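To keep the sizes equal across cells, the griddisk is best created once through dcli rather than cell by cell. The sketch below is a dry run: DCLI defaults to `echo`, so it only prints the command for review; on a node where dcli is configured (see the dcli section later), set DCLI="dcli -g exacells" to execute it. GD_NAME, CD_NAME, and the 96M size are illustrative assumptions:

```shell
# Dry-run sketch of creating one equally sized griddisk on every cell.
DCLI="${DCLI:-echo}"            # set DCLI="dcli -g exacells" to really run it
GD_NAME="DATA01_DG"             # assumed griddisk name
CD_NAME="CD_DISK00_uaexacell1"  # assumed celldisk to carve it from
GD_SIZE="96M"                   # match the size of the existing griddisks

$DCLI cellcli -e create griddisk "$GD_NAME" celldisk="$CD_NAME", size="$GD_SIZE"
```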
7. How to delete a griddisk? Drop a specific griddisk using the syntax below.
CellCLI> list griddisk DATA01_DG
	 DATA01_DG	 active
CellCLI> drop griddisk DATA01_DG
GridDisk DATA01_DG successfully dropped
CellCLI> list griddisk DATA01_DG
CELL-02007: Grid disk does not exist: DATA01_DG
CellCLI>
8. You can also drop a group of griddisks using a prefix. See the syntax below.
CellCLI> list griddisk
	 CD_DISK_CD_DISK00_uaexacell1	 active
	 CD_DISK_CD_DISK01_uaexacell1	 active
	 CD_DISK_CD_DISK02_uaexacell1	 active
	 CD_DISK_CD_DISK03_uaexacell1	 active
	 CD_DISK_CD_DISK04_uaexacell1	 active
	 CD_DISK_CD_DISK05_uaexacell1	 active
	 CD_DISK_CD_DISK06_uaexacell1	 active
	 CD_DISK_CD_DISK07_uaexacell1	 active
	 CD_DISK_CD_DISK08_uaexacell1	 active
	 CD_DISK_CD_DISK09_uaexacell1	 active
	 CD_DISK_CD_DISK10_uaexacell1	 active
	 CD_DISK_CD_DISK11_uaexacell1	 active
	 CD_DISK_CD_DISK12_uaexacell1	 active
	 CD_DISK_CD_DISK13_uaexacell1	 active
CellCLI> drop griddisk all prefix=CD_DISK
GridDisk CD_DISK_CD_DISK00_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK01_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK02_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK03_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK04_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK05_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK06_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK07_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK08_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK09_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK10_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK11_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK12_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK13_uaexacell1 successfully dropped
CellCLI>
The above command drops every griddisk whose name starts with “CD_DISK”.
9. How to drop a specific celldisk? Use the syntax below.
CellCLI> list celldisk CD_DISK00_uaexacell1
	 CD_DISK00_uaexacell1	 normal
CellCLI> drop celldisk CD_DISK00_uaexacell1
CellDisk CD_DISK00_uaexacell1 successfully dropped
CellCLI> list celldisk CD_DISK00_uaexacell1
CELL-02525: Unknown cell disk: CD_DISK00_uaexacell1
CellCLI>
Playing with the Flashdisks
1. List the flashdisks.
CellCLI> LIST CELLDISK where disktype=flashdisk
	 FD_00_uaexacell1	 normal
	 FD_01_uaexacell1	 normal
	 FD_02_uaexacell1	 normal
	 FD_03_uaexacell1	 normal
	 FD_04_uaexacell1	 normal
	 FD_05_uaexacell1	 normal
	 FD_06_uaexacell1	 normal
	 FD_07_uaexacell1	 normal
	 FD_08_uaexacell1	 normal
	 FD_09_uaexacell1	 normal
	 FD_10_uaexacell1	 normal
	 FD_11_uaexacell1	 normal
	 FD_12_uaexacell1	 normal
	 FD_13_uaexacell1	 normal
CellCLI>
Flashdisks are commonly used to create the flashcache and flashlog.
Exadata Flashdisk
2. Configure specific flashdisks as the flashlog.
CellCLI> CREATE FLASHLOG celldisk='FD_00_uaexacell1,FD_01_uaexacell1' , SIZE=100M
Flash log uaexacell1_FLASHLOG successfully created
CellCLI> LIST FLASHLOG
	 uaexacell1_FLASHLOG	 normal
CellCLI> LIST FLASHLOG DETAIL
	 name:                  uaexacell1_FLASHLOG
	 cellDisk:              FD_00_uaexacell1,FD_01_uaexacell1
	 creationTime:          2014-11-16T23:02:50+05:30
	 degradedCelldisks:
	 effectiveSize:         96M
	 efficiency:            100.0
	 id:                    a12265f9-f80b-491b-a0e5-518b2143eede
	 size:                  96M
	 status:                normal
CellCLI>
3. Configure the flashcache on specific flashdisks.
CellCLI> CREATE FLASHCACHE celldisk='FD_03_uaexacell1,FD_04_uaexacell1' , SIZE=100M
Flash cache uaexacell1_FLASHCACHE successfully created
CellCLI> LIST FLASHCACHE
uaexacell1_FLASHCACHE normal
CellCLI> LIST FLASHCACHE DETAIL
	 name:                  uaexacell1_FLASHCACHE
	 cellDisk:              FD_04_uaexacell1,FD_03_uaexacell1
	 creationTime:          2014-11-16T23:04:50+05:30
	 degradedCelldisks:
	 effectiveCacheSize:    96M
	 id:                    fe936779-abfc-4b70-a0d0-5146523cef48
	 size:                  96M
	 status:                normal
CellCLI>
4. Delete the flashlog.
CellCLI> DROP FLASHLOG
Flash log uaexacell1_FLASHLOG successfully dropped
CellCLI> LIST FLASHLOG
CellCLI>
5. Delete the flashcache.
CellCLI> LIST FLASHCACHE
uaexacell1_FLASHCACHE normal
CellCLI> DROP FLASHCACHE
Flash cache uaexacell1_FLASHCACHE successfully dropped
CellCLI> LIST FLASHCACHE
CellCLI>
So far we have used the interactive cellcli utility to manage the virtual storage objects.
Is it possible to manage the storage directly from the Linux command line? Yes. The
example below shows that any cellcli command can be executed from the shell by
passing it to “cellcli -e”.
[celladmin@uaexacell1 ~]$ cellcli -e create griddisk all harddisk prefix=UADB
GridDisk UADB_CD_DISK01_uaexacell1 successfully created
GridDisk UADB_CD_DISK02_uaexacell1 successfully created
GridDisk UADB_CD_DISK03_uaexacell1 successfully created
GridDisk UADB_CD_DISK04_uaexacell1 successfully created
GridDisk UADB_CD_DISK05_uaexacell1 successfully created
GridDisk UADB_CD_DISK06_uaexacell1 successfully created
GridDisk UADB_CD_DISK07_uaexacell1 successfully created
GridDisk UADB_CD_DISK08_uaexacell1 successfully created
GridDisk UADB_CD_DISK09_uaexacell1 successfully created
GridDisk UADB_CD_DISK10_uaexacell1 successfully created
GridDisk UADB_CD_DISK11_uaexacell1 successfully created
GridDisk UADB_CD_DISK12_uaexacell1 successfully created
GridDisk UADB_CD_DISK13_uaexacell1 successfully created
[celladmin@uaexacell1 ~]$ cellcli -e list griddisk where disktype=harddisk
	 UADB_CD_DISK01_uaexacell1	 active
	 UADB_CD_DISK02_uaexacell1	 active
	 UADB_CD_DISK03_uaexacell1	 active
	 UADB_CD_DISK04_uaexacell1	 active
	 UADB_CD_DISK05_uaexacell1	 active
	 UADB_CD_DISK06_uaexacell1	 active
	 UADB_CD_DISK07_uaexacell1	 active
	 UADB_CD_DISK08_uaexacell1	 active
	 UADB_CD_DISK09_uaexacell1	 active
	 UADB_CD_DISK10_uaexacell1	 active
	 UADB_CD_DISK11_uaexacell1	 active
	 UADB_CD_DISK12_uaexacell1	 active
	 UADB_CD_DISK13_uaexacell1	 active
[celladmin@uaexacell1 ~]$
Exadata – Distributed Command-Line Utility (dcli)
The distributed command-line utility (dcli) lets you execute monitoring and
administration commands on multiple servers simultaneously. On an Exadata database
machine, you may need to create griddisks on all the storage cells frequently; without
dcli, you would have to log in to each storage cell and create the griddisks manually.
Once dcli is configured for all the storage cells, either on one of the storage cells or on
a database node, it makes this much easier. In this article, we will see how to configure
dcli for multiple storage cells.
It is good practice to configure dcli on the database server, so that you do not need to
log in to the storage cells for each griddisk creation or drop.
1. Log in to the database server or any one of the Exadata storage cells. Make sure all
the Exadata storage cells have been added to the /etc/hosts file.
[root@uaexacell1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
192.168.2.50    uaexacell1
192.168.2.51    uaexacell2
192.168.2.52    uaexacell3
[root@uaexacell1 ~]#
2. Create a file listing all the Exadata storage cells.
[root@uaexacell1 ~]# cat << END >> exacells
> uaexacell1
> uaexacell2
> uaexacell3
> END
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# cat exacells
uaexacell1
uaexacell2
uaexacell3
[root@uaexacell1 ~]#
3. Create an SSH key pair for the host.
[root@uaexacell1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
15:ac:fb:66:8b:5f:32:09:dd:b9:e7:ca:6c:ef:6b:b4 root@uaexacell1
[root@uaexacell1 ~]#
4. Execute the command below to enable passwordless login for all the hosts listed in
the exacells file. The dcli utility distributes the SSH key to configure passwordless
authentication across the nodes.
[root@uaexacell1 ~]# dcli -g exacells -k
The authenticity of host 'uaexacell1 (192.168.2.50)' can't be established.
RSA key fingerprint is e6:e9:4f:d1:a0:05:eb:38:d5:bf:5b:fb:2a:5f:2c:b7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'uaexacell1,192.168.2.50' (RSA) to the list of known hosts.
celladmin@uaexacell1's password:
celladmin@uaexacell2's password:
celladmin@uaexacell3's password:
uaexacell1: ssh key added
uaexacell2: ssh key added
uaexacell3: ssh key added
[root@uaexacell1 ~]#
We have successfully configured the dcli utility for all the Exadata storage cells. Now
we can monitor and administer the cell nodes from the current host.
5. Check the status of all the Exadata cells.
[root@uaexacell1 ~]# dcli -g exacells cellcli -e list cell
uaexacell1: uaexacell1 online
uaexacell2: uaexacell1 online
uaexacell3: uaexacell1 online
[root@uaexacell1 ~]#
6. Create griddisks on all the Exadata storage nodes using the dcli utility. First, list the
celldisks on every cell:
[root@uaexacell1 ~]# dcli -g exacells cellcli -e list celldisk where disktype=harddisk
uaexacell1: CD_DISK01_uaexacell1	 normal
uaexacell1: CD_DISK02_uaexacell1	 normal
uaexacell1: CD_DISK03_uaexacell1	 normal
uaexacell1: CD_DISK04_uaexacell1	 normal
uaexacell1: CD_DISK05_uaexacell1	 normal
uaexacell1: CD_DISK06_uaexacell1	 normal
uaexacell1: CD_DISK07_uaexacell1	 normal
uaexacell1: CD_DISK08_uaexacell1	 normal
uaexacell1: CD_DISK09_uaexacell1	 normal
uaexacell1: CD_DISK10_uaexacell1	 normal
uaexacell1: CD_DISK11_uaexacell1	 normal
uaexacell1: CD_DISK12_uaexacell1	 normal
uaexacell1: CD_DISK13_uaexacell1	 normal
uaexacell2: CD_DISK01_uaexacell1	 normal
uaexacell2: CD_DISK02_uaexacell1	 normal
uaexacell2: CD_DISK03_uaexacell1	 normal
uaexacell2: CD_DISK04_uaexacell1	 normal
uaexacell2: CD_DISK05_uaexacell1	 normal
uaexacell2: CD_DISK06_uaexacell1	 normal
uaexacell2: CD_DISK07_uaexacell1	 normal
uaexacell2: CD_DISK08_uaexacell1	 normal
uaexacell2: CD_DISK09_uaexacell1	 normal
uaexacell2: CD_DISK10_uaexacell1	 normal
uaexacell2: CD_DISK11_uaexacell1	 normal
uaexacell2: CD_DISK12_uaexacell1	 normal
uaexacell2: CD_DISK13_uaexacell1	 normal
uaexacell3: CD_DISK01_uaexacell1	 normal
uaexacell3: CD_DISK02_uaexacell1	 normal
uaexacell3: CD_DISK03_uaexacell1	 normal
uaexacell3: CD_DISK04_uaexacell1	 normal
uaexacell3: CD_DISK05_uaexacell1	 normal
uaexacell3: CD_DISK06_uaexacell1	 normal
uaexacell3: CD_DISK07_uaexacell1	 normal
uaexacell3: CD_DISK08_uaexacell1	 normal
uaexacell3: CD_DISK09_uaexacell1	 normal
uaexacell3: CD_DISK10_uaexacell1	 normal
uaexacell3: CD_DISK11_uaexacell1	 normal
uaexacell3: CD_DISK12_uaexacell1	 normal
uaexacell3: CD_DISK13_uaexacell1	 normal
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# dcli -g exacells cellcli -e create griddisk HRDB celldisk=CD_DISK01_uaexacell1, size=100M
uaexacell1: GridDisk HRDB successfully created
uaexacell2: GridDisk HRDB successfully created
uaexacell3: GridDisk HRDB successfully created
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# dcli -g exacells cellcli -e list griddisk HRDB detail
uaexacell1: name:                  HRDB
uaexacell1: availableTo:
uaexacell1: cachingPolicy:         default
uaexacell1: cellDisk:              CD_DISK01_uaexacell1
uaexacell1: comment:
uaexacell1: creationTime:          2014-11-17T15:46:43+05:30
uaexacell1: diskType:              HardDisk
uaexacell1: errorCount:            0
uaexacell1: id:                    3bf213a3-dafc-41b7-b133-5580dd04c334
uaexacell1: offset:                48M
uaexacell1: size:                  96M
uaexacell1: status:                active
uaexacell2: name:                  HRDB
uaexacell2: availableTo:
uaexacell2: cachingPolicy:         default
uaexacell2: cellDisk:              CD_DISK01_uaexacell1
uaexacell2: comment:
uaexacell2: creationTime:          2014-11-17T15:46:43+05:30
uaexacell2: diskType:              HardDisk
uaexacell2: errorCount:            0
uaexacell2: id:                    21014da6-6e17-4ca1-a7dc-cc059bd75654
uaexacell2: offset:                48M
uaexacell2: size:                  96M
uaexacell2: status:                active
uaexacell3: name:                  HRDB
uaexacell3: availableTo:
uaexacell3: cachingPolicy:         default
uaexacell3: cellDisk:              CD_DISK01_uaexacell1
uaexacell3: comment:
uaexacell3: creationTime:          2014-11-17T15:46:43+05:30
uaexacell3: diskType:              HardDisk
uaexacell3: errorCount:            0
uaexacell3: id:                    3821ce2c-4376-4674-8cb4-6c8868b5b1f9
uaexacell3: offset:                48M
uaexacell3: size:                  96M
uaexacell3: status:                active
[root@uaexacell1 ~]#
You can also run dcli without a group file by passing the cell list with the -c option.
[root@uaexacell1 ~]# dcli -c uaexacell1,uaexacell2,uaexacell3 cellcli -e drop griddisk HRDB
uaexacell1: GridDisk HRDB successfully dropped
uaexacell2: GridDisk HRDB successfully dropped
uaexacell3: GridDisk HRDB successfully dropped
[root@uaexacell1 ~]#
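The dcli calls above can be wrapped in a small convenience function (hypothetical, assuming the exacells group file from step 2 is in the current directory). DCLI defaults to `echo` here, so the function only prints what it would run; set DCLI=dcli on a configured node to execute for real:

```shell
# allcells runs any cellcli command on every cell in the exacells group file.
DCLI="${DCLI:-echo}"   # dry run by default; DCLI=dcli to execute
allcells() {
  $DCLI -g exacells cellcli -e "$@"
}

allcells list griddisk        # prints: -g exacells cellcli -e list griddisk
allcells drop griddisk HRDB   # prints: -g exacells cellcli -e drop griddisk HRDB
```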
Exadata Storage Cell Commands Cheat Sheet
It is not easy to remember all of these commands, since most UNIX administrators
work across multiple operating systems and OS flavors; Exadata and the ZFS appliance
add further responsibility, and administrators need to remember those appliance
commands as well. This section provides a reference to the Exadata storage cell
commands.
All the commands below work at the cellcli prompt (or from the shell via “cellcli -e”).
Listing the Exadata Storage cell Objects (LIST)
Command
Description
Examples
cellcli
To Manage the Exadata cell
Storage
[root@uaexacell1 init.d]# c
CellCLI: Release 11.2.3.2.1
02:16:03 GMT+05:30 2014
Copyright (c) 2007, 2012, O
Cell Efficiency Ratio: 1CellC
LIST CELL
List the Cell Status
CellCLI> LIST CELL
uaexacell1
online
CellCLI>
LIST LUN
To list all the physical Drive & Flash
drives
LIST PHYSICALDISK
To list all the physical Drive & Flash
drives
LIST LUN where celldisk =
<celldisk>
To list the LUN which is mapped to
specific disk
CellCLI> LIST LUN where ce
FLASH13 FLASH13 norma
LIST CELL DETAIL
List the cell Status with all
attributes
CellCLI> LIST CELL DETAIL
name:
uaexacel
bbuTempThreshold:
60
bbuChargeThreshold:
800
bmcType:
absent
LIST CELL attributes
<attribute>
To list the specific cell attributes
CellCLI> LIST CELL attribute
WriteThrough
List all the cell Disks
CellCLI> LIST CELLDISK
CD_DISK00_uaexacell1 no
LIST CELLDISK DETAIL
    Lists all the cell disks with detailed information (name, creation time, device name, device partition, disk type, and so on).

LIST CELLDISK <CELLDISK> DETAIL
    Lists a specific cell disk in detail.

LIST CELLDISK WHERE disktype=harddisk
    Lists the cell disks that were created on hard disks.
    CellCLI> LIST CELLDISK WHERE disktype=harddisk
      CD_DISK00_uaexacell1
      CD_DISK01_uaexacell1
      CD_DISK02_uaexacell1

LIST CELLDISK WHERE disktype=flashdisk
    Lists the cell disks that were created on flash disks.
    CellCLI> LIST CELLDISK WHERE disktype=flashdisk
      FD_00_uaexacell1    normal
      FD_01_uaexacell1    normal
      FD_02_uaexacell1    normal

LIST CELLDISK WHERE freespace > <size>
    Lists the cell disks that have more free space than the specified size.

LIST FLASHCACHE
    Lists the configured flash cache.

LIST FLASHCACHE DETAIL
    Lists the configured flash cache in detail.

LIST FLASHLOG
    Lists the configured flash log.

LIST FLASHLOG DETAIL
    Lists the configured flash log in detail.

LIST FLASHCACHECONTENT
    Lists the flash cache content.

LIST GRIDDISK
    Lists the grid disks.

LIST GRIDDISK DETAIL
    Lists the grid disks in detail (name, availableTo, cachingPolicy, cellDisk, and so on).

LIST GRIDDISK <GRIDDISK_NAME>
    Lists the specified grid disk.

LIST GRIDDISK <GRIDDISK_NAME> DETAIL
    Lists the specified grid disk in detail.

LIST GRIDDISK WHERE size > <size>
    Lists the grid disks that are larger than the specified size.

LIST IBPORT
    Lists the InfiniBand ports.

LIST IORMPLAN
    Lists the IORM plan.
    CellCLI> LIST IORMPLAN
      uaexacell1_IORMPLAN    active

LIST IORMPLAN DETAIL
    Lists the IORM plan in detail.
    CellCLI> LIST IORMPLAN DETAIL
      name:       uaexacell1_IORMPLAN
      catPlan:
      dbPlan:
      objective:  basic
      status:     active

LIST METRICCURRENT
    Lists the current metric values (for example, the I/Os per second) for all objects.

LIST METRICCURRENT cl_cput, cl_runq DETAIL
    Lists the cell CPU utilization and run queue metrics.
    CellCLI> LIST METRICCURRENT cl_cput, cl_runq DETAIL
      name:          CL_CPUT
      alertState:    normal
      metricType:    Instantaneous
      metricValue:   4.7 %
      objectType:    CELL

      name:          CL_RUNQ
      alertState:    normal
      metricType:    Instantaneous
      metricValue:   12.2
      objectType:    CELL

LIST QUARANTINE
    Lists the quarantines.

LIST QUARANTINE DETAIL
    Lists the quarantines in detail.

LIST THRESHOLD
    Lists the threshold limits.

LIST THRESHOLD DETAIL
    Lists the threshold limits in detail.

LIST ACTIVEREQUEST
    Lists the active requests.

LIST ALERTHISTORY
    Lists the alerts.
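When you need to act on many disks at once, a common pattern is to save a LIST command's output to a file and filter it with standard shell tools. The sketch below is a hypothetical example: the sample file contents and disk names are made up for illustration, and on a real cell the file would be captured from CellCLI itself.

```shell
#!/bin/sh
# On a real storage cell you would capture the listing with something like:
#   cellcli -e "LIST CELLDISK ATTRIBUTES name, diskType, freeSpace" > celldisks.txt
# Here we use a small hand-made sample so the filtering can be shown offline.
cat > celldisks.txt <<'EOF'
CD_DISK00_uaexacell1  HardDisk   528G
CD_DISK01_uaexacell1  HardDisk   528G
FD_00_uaexacell1      FlashDisk  0
FD_01_uaexacell1      FlashDisk  0
EOF

# Keep only the flash cell disks, mimicking LIST CELLDISK WHERE disktype=flashdisk
awk '$2 == "FlashDisk" {print $1}' celldisks.txt
```

This prints `FD_00_uaexacell1` and `FD_01_uaexacell1`; the same filter works for any attribute column you include in the listing.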
CREATING the Exadata Storage cell Objects (CREATE)
The commands below are the ones most commonly used to create the virtual objects on the Exadata storage cell.
CREATE CELL <CELL_NAME> interconnect1=<ethX>
    Configures the cell network.
    CellCLI> CREATE CELL uaexacell1 interconnect1=eth1
      Cell uaexacell1 successfully created
      Starting CELLSRV services...
      The STARTUP of CELLSRV services was successful.
      Flash cell disks, FlashCache, and FlashLog will be created...

CREATE CELLDISK <CELLDISK_NAME> LUN=<lun>
    Creates a cell disk according to the attributes provided.

CREATE CELLDISK ALL HARDDISK
    Creates cell disks on all the hard disks.
    CellCLI> CREATE CELLDISK ALL HARDDISK
      CellDisk CD_DISK00_uaexacell1 successfully created
      CellDisk CD_DISK01_uaexacell1 successfully created
      CellDisk CD_DISK02_uaexacell1 successfully created

CREATE CELLDISK ALL
    Creates cell disks on all the hard disks and flash disks.

CREATE CELLDISK ALL FLASHDISK
    Creates cell disks on all the flash disks.
    CellCLI> CREATE CELLDISK ALL FLASHDISK
      CellDisk FD_00_uaexacell1 successfully created

CREATE FLASHCACHE celldisk='<flash_celldisk>', size=<size>
    Creates flash cache for I/O requests on the specified flash disk.
    CellCLI> CREATE FLASHCACHE celldisk='FD_00_uaexacell1', size=500M

CREATE FLASHCACHE ALL size=<size>
    Creates flash cache for I/O requests on all flash devices, with the specified size.

CREATE FLASHLOG celldisk='<flash_celldisk>', size=<size>
    Creates flash log for logging requests on the specified flash disk.
    CellCLI> CREATE FLASHLOG celldisk='FD_00_uaexacell1', size=500M

CREATE FLASHLOG ALL size=<size>
    Creates flash log for logging requests on all flash devices, with the specified size.

CREATE GRIDDISK <GRIDDISK_NAME> CELLDISK=<celldisk>
    Creates a grid disk on the specified cell disk.
    CellCLI> CREATE GRIDDISK UADBDK1 CELLDISK=CD_DISK00_uaexacell1
      GridDisk UADBDK1 successfully created

CREATE GRIDDISK <GRIDDISK_NAME> CELLDISK=<celldisk>, size=<size>
    Creates a grid disk on the specified cell disk with the specified size.

CREATE GRIDDISK ALL HARDDISK PREFIX=<Disk_Name>, size=<size>
    Creates grid disks on all the hard disks with the specified size. Cell disks that do not have enough free space for the grid disks are skipped.

CREATE GRIDDISK ALL FLASHDISK PREFIX=<Disk_Name>, size=<size>
    Creates grid disks on all the flash disks with the specified size.

CREATE KEY
    Creates and displays a random key for use in assigning client keys.
    CellCLI> CREATE KEY
      1820ef8f9c2bafcd12...

CREATE QUARANTINE quarantineType=<"SQLID" | "DISK REGION" | "SQL PLAN" | "CELL OFFLOAD"> attributename=value
    Defines the attributes for a new quarantine entity.
    CellCLI> CREATE QUARANTINE quarantineType="SQLID" ...
      Quarantine successfully created

CREATE THRESHOLD <Threshold1> attributename=value
    Defines the conditions for generation of a metric alert.
    CellCLI> CREATE THRESHOLD db_io_rq_sm_sec.db1 critical=120
      Threshold db_io_rq_sm_sec.db1 successfully created
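For a fresh cell build, the CREATE commands above are usually run as one batch rather than typed interactively. The sketch below is a hypothetical example of that: it collects a typical sequence into a command file that CellCLI can read from standard input. The UADBPROD prefix and the sizes are example values, not taken from a real system.

```shell
#!/bin/sh
# Collect the usual build sequence into a reviewable command file.
cat > create_cell_objects.cli <<'EOF'
CREATE CELLDISK ALL
CREATE FLASHLOG ALL size=512M
CREATE FLASHCACHE ALL
CREATE GRIDDISK ALL HARDDISK PREFIX=UADBPROD, size=300G
EOF

# On a storage cell you would then run:  cellcli < create_cell_objects.cli
cat create_cell_objects.cli
```

Writing the commands to a file first also gives you a record of exactly what was created on each cell.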
DELETING the Exadata Storage cell Objects (DROP)
The CellCLI commands below will help you remove the various objects on the Exadata storage cell. Be careful with the FORCE option, since it can remove an object even when it is still in use.
DROP ALERTHISTORY <alert1>, <alert2>
    Removes the specified alerts from the cell's alert history.
    CellCLI> DROP ALERTHISTORY 2
      Alert 2 successfully dropped

DROP ALERTHISTORY ALL
    Removes all alerts from the cell's alert history.
    CellCLI> DROP ALERTHISTORY ALL
      Alert 1_1 successfully dropped
      Alert 1_2 successfully dropped
      Alert 1_3 successfully dropped
      Alert 1_4 successfully dropped
      Alert 1_5 successfully dropped
      Alert 1_6 successfully dropped

DROP THRESHOLD <threshold>
    Removes the specified threshold from the cell.
    CellCLI> DROP THRESHOLD db_io_rq_sm_sec.db1
      Threshold db_io_rq_sm_sec.db1 successfully dropped

DROP THRESHOLD ALL
    Removes all thresholds from the cell.

DROP QUARANTINE <quarantine1>
    Removes the specified quarantine from the cell.

DROP QUARANTINE ALL
    Removes all the quarantines from the cell.

DROP GRIDDISK <Griddisk_Name>
    Removes the specified grid disk from the cell.
    CellCLI> DROP GRIDDISK UADBDK1
      GridDisk UADBDK1 successfully dropped

DROP GRIDDISK ALL PREFIX=<GRIDDISK_STARTNAME>
    Removes the set of grid disks whose names start with the given prefix.

DROP GRIDDISK <GRIDDISK> ERASE=1pass
    Removes the specified grid disk from the cell and performs secure data deletion on it.

DROP GRIDDISK <GRIDDISK> FORCE
    Drops the grid disk even if it is currently active.

DROP GRIDDISK ALL HARDDISK
    Drops the grid disks that were created on top of hard disks.

DROP GRIDDISK ALL
    Drops all the grid disks on the cell.
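Because DROP is destructive, it can be safer to generate an explicit DROP command for every matching grid disk and review the list before running anything. The sketch below is a hypothetical example of that pattern; the grid disk names are sample data, and on a real cell the list would be captured from CellCLI.

```shell
#!/bin/sh
# On a real cell the list would come from:  cellcli -e "LIST GRIDDISK" > griddisks.txt
cat > griddisks.txt <<'EOF'
DATA01_CD_DISK00_uaexacell1
DATA01_CD_DISK01_uaexacell1
RECO01_CD_DISK00_uaexacell1
EOF

# Build one DROP command per grid disk matching the prefix.
PREFIX=DATA01
grep "^${PREFIX}_" griddisks.txt | while read -r gd; do
    echo "DROP GRIDDISK ${gd}"
done > drop_griddisks.cli

cat drop_griddisks.cli   # review, then:  cellcli < drop_griddisks.cli
```

Only the two DATA01 disks end up in the command file; the RECO01 disk is left untouched, which is exactly the safety check you lose with a blanket DROP GRIDDISK ALL.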
Modifying the Exadata Storage cell Objects (ALTER)
The commands below will help you modify the cell attributes and various object settings. The ALTER command is also used to start, stop, and restart the MS, RS, and CELLSRV services.
ALTER ALERTHISTORY <alert_id> examinedby=<user_name>
    Sets the examinedBy attribute of an alert.

ALTER CELL RESTART SERVICES ALL
    Restarts all (RS, MS, and CELLSRV) services.

ALTER CELL RESTART SERVICES <RS | MS | CELLSRV>
    Restarts the specified service.

ALTER CELL SHUTDOWN SERVICES ALL
    Halts all (RS, MS, and CELLSRV) services.

ALTER CELL SHUTDOWN SERVICES <RS | MS | CELLSRV>
    Shuts down the specified service.

ALTER CELL STARTUP SERVICES ALL
    Starts all (RS, MS, and CELLSRV) services.

ALTER CELL STARTUP SERVICES <RS | MS | CELLSRV>
    Starts the specified service.

ALTER CELL NAME=<Name>
    Sets or renames the Exadata storage cell.
    CellCLI> ALTER CELL NAME=UAEXACELL1
      Cell UAEXACELL1 successfully altered

ALTER CELL flashCacheMode=WriteBack
    Changes the flash cache mode from writethrough to writeback. To do this, you need to drop the flash cache and stop CELLSRV first, then change the mode, restart CELLSRV, and create a new flash cache.
    CellCLI> DROP FLASHCACHE
    CellCLI> ALTER CELL SHUTDOWN SERVICES CELLSRV
    CellCLI> ALTER CELL flashCacheMode=WriteBack
    CellCLI> ALTER CELL STARTUP SERVICES CELLSRV
    CellCLI> CREATE FLASHCACHE celldisk='FD_00_uaexacell1', size=500M

ALTER CELL interconnect1=<Network_Interface>
    Sets the network interface for the cell storage. A restart of the cell services is required; CELLSRV continues to use the old network until it is restarted.

ALTER CELL LED OFF
    Turns the chassis LED off.

ALTER CELL LED ON
    Turns the chassis LED on.

ALTER CELL smtpServer='<SMTP_SERVER>'
    Sets the SMTP server.

ALTER CELL smtpFromAddr='<myaddress@mydomain.com>'
    Sets the email From address.

ALTER CELL smtpToAddr='<myaddress@mydomain.com>'
    Sends the alerts to this email address.

ALTER CELL smtpFrom='<myhostname>'
    Sets the alias host name for email.

ALTER CELL smtpPort='25'
    Sets the SMTP port.

ALTER CELL smtpUseSSL='TRUE'
    Makes SMTP use SSL.

ALTER CELL notificationPolicy='critical,warning,clear'
    Sends the alerts for critical, warning, and clear events.

ALTER CELL notificationMethod='mail'
    Sets the notification method to email.

ALTER CELLDISK <existing_celldisk_name> name='<new_celldisk_name>', comment='<comments>'
    Modifies the cell disk name and comment.

ALTER CELLDISK ALL HARDDISK FLUSH
    Flushes the dirty blocks for all hard disks.

ALTER CELLDISK ALL HARDDISK FLUSH NOWAIT
    Allows the ALTER command to complete while the flush operation continues on all hard disks.

ALTER CELLDISK ALL HARDDISK CANCEL FLUSH
    Terminates the previous flush operation on all hard disks.

ALTER CELLDISK <CELLDISK> FLUSH
    Flushes the dirty blocks for the specified cell disk.

ALTER CELLDISK <CELLDISK> FLUSH NOWAIT
    Allows the ALTER command to complete while the flush operation continues on the specified cell disk.

ALTER FLASHCACHE ALL size=<size>
    Resizes the flash cache across all flash cell disks to the specified size.

ALTER FLASHCACHE ALL
    Assigns all the flash disks to the flash cache.

ALTER FLASHCACHE CELLDISK='<Flashcelldisk1>,<Flashcelldisk2>'
    Assigns the specified flash cell disks to the flash cache; the other flash disks are removed from it.

ALTER FLASHCACHE ALL FLUSH
    Flushes the dirty blocks for all flash disks.

ALTER FLASHCACHE ALL FLUSH NOWAIT
    Allows the ALTER command to complete while the flush operation continues on all the flash cell disks.

ALTER FLASHCACHE ALL CANCEL FLUSH
    Terminates the previous flush operation on all flash disks.

ALTER FLASHCACHE CELLDISK=<FLASH-CELLDISK> FLUSH
    Flushes the dirty blocks for the specified flash cell disk.

ALTER FLASHCACHE CELLDISK=<FLASH-CELLDISK> CANCEL FLUSH
    Terminates the previous flush operation on the specified flash cell disk.

Note: Do not modify the Exadata storage cell configuration without notifying Oracle Support.
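The writethrough-to-writeback conversion is a multi-step procedure, so it is worth scripting rather than typing live. The sketch below is a hypothetical example that writes the sequence from the ALTER CELL flashCacheMode entry to a reviewable command file; the flash cell disk name and 500M size are example values, and you should confirm the exact procedure for your Exadata version with Oracle Support before running it.

```shell
#!/bin/sh
# Write the conversion sequence to a file so it can be reviewed first.
cat > writeback_convert.cli <<'EOF'
DROP FLASHCACHE
ALTER CELL SHUTDOWN SERVICES CELLSRV
ALTER CELL flashCacheMode=WriteBack
ALTER CELL STARTUP SERVICES CELLSRV
CREATE FLASHCACHE celldisk='FD_00_uaexacell1', size=500M
EOF

cat writeback_convert.cli   # review, then:  cellcli < writeback_convert.cli
```

Keeping the steps in one file makes the order explicit: the flash cache must be dropped and CELLSRV stopped before the mode change, and the cache recreated only after CELLSRV is back up.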