Title: Features and Functionality Training
Features and Functionality Training
Thursday, June 11, 2015
Introduction: What Are We Doing Here?
- 1-day course
- Features and functionality of the Satori Blade
- Hands-on labs
- Goals
- Integrating with Host Database
- Working with a Dataupia Array
- Non-goals
- Deploying a Dataupia Array
- Supporting a Customer Deployment
- Extensive Troubleshooting
- Logistics
- Restroom, food, breaks, phones, etc.
Agenda
- Unit 1: Technical Overview
- Unit 2: The Management Console -- Demo
- Unit 3: Using the CLI -- Demo
- Unit 4: The Dynamic Aggregation Studio -- Demo
- Unit 5: Delegating Tables -- Lab
- Unit 6: Loading Data -- Lab
- Unit 7: Basic Troubleshooting
Unit 1: Technical Overview
- Host Client, Blades and Arrays
- High Availability - Drives
- Typical DBMS Stack
- MPP Database Architecture
- Query Stack
- Dataupia Data Loader
- Host DB Connectors: Oracle, SQL
- Management Backplane
Unit 1: Host Client, Blades and Arrays
OS: Solaris, Windows; Database Server: Oracle, MS-SQL Server
Unit 1: Dataupia Client Components
- Native Database: Oracle, SQL Server
- Data Loader
- Dataupia Client Software
- Database Plug-in
- Dataupia Drivers
Unit 1: Dataupia Satori Server Components
- Management Backplane
- Management Console
- CLI
- Database Engine
Unit 1: Typical Dataupia Array - Physical Components
- Network Switch
- Terminal Server
- Additional Components
- Dataupia Satori Servers
- DA Blade
- Loader Blade
- KVM
- Switched PDU (2)
Unit 1: High Availability - Drives
- OS: loaded on an internal flash drive (read-only, with 2 boot partitions)
- Data storage: 8 hot-swappable drives on a RAID-5 controller, with 7 drives in the RAID-5 array and 1 drive allocated as a hot spare
Unit 1: Typical IT Architecture
Diagram labels: Database, Platform, Storage layers; Existing Oracle Databases; Existing MS-SQL Database; Standard App Interface (ODBC, JDBC, SQL); Standard Interconnect Interface (SAN / NAS)
Unit 1: Dataupia's MPP Architecture
Diagram labels: Dataupia Satori Blades; Database and Platform layers; Existing Oracle Databases; Existing MS-SQL Database; Dataupia Drivers; Standard App Interface (ODBC, JDBC, SQL)
Unit 1: Query Stack - Transparency
Diagram labels: Host Server; Existing Oracle Database; Global Services; Dataupia Array; Management Backplane
Unit 1: Management Backplane
- Provides a wrapper for the OS; the user has a protected shell
- Compact Flash drive (2 OS images on the blade)
- Diagnostic utilities
- Broadcast upgrades (new image installed to the non-booted partition)
- Centralized Management Framework
- CLI language
Unit 1: Global Services
- Bind blades into an array
- Root Service
- Blade Service
- Database Service
- ID Service
- Transaction Service
- Blade Daemon runs on each blade
Diagram labels: Existing Database; DT Client; Dataupia Satori Blade; Dataupia Global Services
Unit 1: Dataupia Data Loader
Diagram: binary/CSV data flows from the host server system (with the DT client installed) through dtldr to the Dataupia array
Unit 2: The Management Console
- General navigation
- Health of blades and array
- Administration features
- Personalization
- Query management
- Upgrading blade software
- Online help
Unit 2: General Navigation
Screen elements: Menus; Navigation Pane; Tabs; Health Charts; Statistical Snapshot
Unit 2: Health of Blades and Array
Blade Health
Unit 2: Administration Features
Screen elements: Tab Menus; Unassigned Blades
Unit 2: Personalization
Unit 2: Notifications
Unit 2: Query Management
Unit 2: Query Actions
- Terminate: Forces a query process to terminate within a short time, giving it a chance to finish its work and produce partial results before ending.
- Kill: Ends a query process immediately, with no chance for any results.
- Raise Priority: Gives a query process a higher processing priority on the array, which may enable it to fully execute at a normal or near-normal rate.
- Lower Priority: Leaves a query process active (for example, if you want to preserve the current situation for further analysis) but lessens its impact on overall system performance.
Unit 2: Upgrading Blade Software
Unit 2: Online Help
Demo, Unit 2: The Management Console
- Configure health charts, warnings, logging
- Set up email recipients
- Review and take action on queries
- Upgrade the software image
Unit 3: CLI
- Types of Users
- Dataupia User Commands
- Dataupia Support commands
- Using CLI commands
- Writing a script
Unit 3: CLI Overview
- Provides alternatives to the DMC for information and some actions.
- Connect a keyboard and monitor to the blade, or use an SSH connection to the head blade's IP address (the same address you load in your browser for the DMC).
- Log in with the same username and password as on the DMC.
- Three modes: standard, enable, and configure.
- Interactive help lets you check the usage and options for any command and subcommand.
Unit 3: CLI Command Modes
- Standard mode
- Active when you first log in.
- Enable mode
- Enter with the enable command. View all available information.
- Take some actions, but no configuration changes.
- Configure mode
- Enter with the configure terminal command. Make configuration changes.
- Prompts indicate the mode you are in:
- blade101 > (standard)
- blade101 (enable)
- blade101 (config) (configure)
Unit 3: Command Help and Completion
- Enter ? on the command line to see a list of commands available in the current mode.
- Use ? following a partial word to narrow the list; for example, t? in standard mode displays the terminal, telnet, and traceroute commands.
- Follow a command or subcommand with ? to see usage and options. For example, in configure mode:
- blade101 (config) image ?
    boot      Specify which system image to boot by default
    fetch     Download a system image from a remote host
    install   Install an image file onto a system partition
- blade101 (config) image boot ?
    location  Specify location from which to boot system
    next      Boot system from next location after the one currently booted
- blade101 (config) image boot location ?
    1         Boot from location 1
    2         Install to location 2
- Use the Tab key to complete unambiguous commands, options, and arguments (see the example session below).
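A short illustrative session tying the modes, help, and completion together. Every command shown appears on the surrounding slides; the exit command used to leave each mode is an assumption, and command output is omitted.

    blade101 > enable                          (standard mode to enable mode)
    blade101 configure terminal                (enable mode to configure mode)
    blade101 (config) image ?                  (list the image subcommands: boot, fetch, install)
    blade101 (config) image boot location 1    (boot from image location 1 by default)
    blade101 (config) write memory             (save configuration changes)
    blade101 (config) exit                     (assumed: back to enable mode)
    blade101 exit                              (assumed: back to standard mode)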
Unit 3: Show Commands
- Use show to display information about the blade you are logged into and the array it is part of. Use ? to check the usage and options. Important show commands (a scripted example follows the table):
show array Display array properties, status and blade membership
show blade Display Dataupia blade configuration
show dtstore Display dtstore configuration
show queries Display queries running on a blade
show query Display details of a specific query
show logging Display logging configuration
show raid Display RAID controller, unit, and drive status.
show email Display current email notification settings
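The Unit 3 outline also lists writing a script. A minimal health-check sketch, assuming the CLI accepts commands piped over an SSH session and that admin is a valid login name; adjust both to your environment.

    #!/bin/sh
    # Run a set of read-only show commands on the head blade (hypothetical address and account).
    ssh admin@blade101 << 'EOF'
    enable
    show array
    show blade
    show raid
    show queries
    show logging
    EOF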
333 Array and Blade Commands
These commands let you configure the blade and
the array. (Remember to use ? to check the usage
and options, and see the User Guide.)
array createjoin Create a new array and make the current blade the head blade. Services are restarted. You can set the array name, the array ID, and the database port number, or the system will generate them for you.
blade join Join the local blade to an existing array and restart services. If you specified a database port in the createjoin command, you must specify the same port here. In this release you cannot join a blade to an array with data on it.
blade restart Restart all services on the local blade, or only global services.
blade reload Reboot the blade without powering off.
blade shutdown Shut down the blade and power it off.
image Fetch a software image, install a software image, boot the partition on which you have installed an image.
email Enable and configure email notifications to be sent to specified addresses when specified events occur. The no prefix lets you cancel email configuration. (Many options, use email ? to get started.)
write memory Save configuration changes to the configuration database.
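A sketch of bringing up a two-blade array with the commands above. The blade names are hypothetical, the arguments to array createjoin and blade join are elided (check them with ?), and showing these commands in configure mode is an assumption.

    (on the first blade, which becomes the head blade)
    blade101 > enable
    blade101 configure terminal
    blade101 (config) array createjoin ?      (check usage: array name, array ID, database port)
    blade101 (config) array createjoin ...    (create the array; services are restarted)
    blade101 (config) write memory            (save the configuration)

    (on a second blade, joining the existing array)
    blade102 > enable
    blade102 configure terminal
    blade102 (config) blade join ?            (check usage; use the same database port if one was set)
    blade102 (config) blade join ...          (join the array; services are restarted)
    blade102 (config) write memory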
Unit 3: Other Useful Commands
Use ? to check the usage and options.
cli Configure CLI options
clock Set the time, date, and timezone.
ping A network tool used to test whether a particular host is reachable across an IP network
traceroute A network tool used to determine the route taken by packets across an IP network
slogin Connect to another blade or system using ssh
telnet Connect to another blade or system using telnet
terminal Configure terminal display options
Demo, Unit 3: CLI
- Log in and use different CLI modes
- Use show commands to display available information about the array, blades, other arrays, software images, and various settings
- Configure email notification settings
- Kill a query
- Display the logging configuration
Unit 4: The Dynamic Aggregation Studio
- You can use the DA Studio to create Aggregates, or Data Cubes.
- With Aggregates you can view and manipulate data in multiple dimensions.
- Aggregates consist of dimensions and measures.
- Measures: items that are counted, summarized, averaged, etc., such as costs or units of service.
- Dimensions: the columns that the measures will be grouped by, such as dates or locations.
Unit 4: General Navigation
Screen elements: Navigation Pane; Detailed Information for Selected Folder; Server Information
Unit 4: Select Input Data
Unit 4: Select and Compile Dimensions
Unit 4: Create and Build the Aggregate
Unit 4: Query the Aggregate
Demo, Unit 4: Creating an Aggregate with the Demo Project
- For training purposes, a demo project is provided as part of the Dynamic Aggregation Studio installation.
- The demo project comes with an input data file already loaded.
- In this demo we will create an aggregate using the demo project.
Unit 5: Delegating Tables
- Data distribution on Dataupia arrays
- Delegating native database tables to Dataupia
Unit 5: Data Distribution Methods for Array Tables
Method Data Allocation
round robin Uniform serial across blades (default)
single All on one blade
hashed Hashed by column across blades
all All on all blades
Unit 5: Choosing Round-Robin Distribution
- Records are distributed serially and uniformly across blades: one row to each blade in repeated sequence.
- The default method.
- Guarantees even distribution of data across blades with no data analysis required.
- Use when there is no natural distribution key.
- Best suited to fact tables.
Unit 5: Choosing All-Blade Distribution
- Tables are co-located on every active blade in the array. All records are copied to all blades.
- Required for dimension tables that participate in joins.
- Ensures that fact/dimension joins will process in parallel and not require cross-blade execution.
Unit 5: Choosing Hashed Distribution
- Records are distributed by a deterministic hash function using the specified column(s) as the distribution key.
- Record distribution depends on the key but should be close to even.
- Requires a unique or nearly unique distribution key to ensure acceptably even distribution.
- The distribution key must be non-volatile and not nullable, as well as unique or nearly unique.
Unit 5: Choosing Single-Blade Distribution
- Each table is located entirely on a single blade.
- Single distribution is appropriate for smaller tables and less frequently used tables.
Unit 5: Delegating Tables and Data
- Extract Oracle tables to CSV (or binary) files
- Register a table's data distribution method
- Create the table on the array, including indexing
- Create a synonym or view in the host database
Unit 5: Delegation Process Overview
Diagram: Tables A and B in the host DB are delegated to the Dataupia array; their definitions are registered with regtable and created with dttable, and their data is extracted from the host DB and loaded with dtldr.
Unit 5: The Delegator (1 of 3)
- Java-based tool with a simple web interface
- Load http://oracle_host:11521/delegator into your browser
- Log in as an Oracle user with rights to perform DDL operations on the database instance
Unit 5: The Delegator (2 of 3)
Unit 5: The Delegator (3 of 3)
Unit 5: Creating Array Tables Manually
- At the shell prompt:
- Register a data distribution specification for the array table using the regtable command
- Create the array table using the dttable create command
- In Oracle:
- Change the name of the Oracle table
- Create a synonym and/or view in Oracle to the array table
Unit 5: Manual Step 1 - Distribution Commands
regtable: Register a table with the specified distribution method. If no method is specified, the default is round robin. (An example follows this slide.)
    regtable <DT_systemid> <tbname> single|rr|distmap|all col1, ...
chtable: Change a table's existing distribution method. Do NOT change the distribution method of a table with data in it.
    chtable <DT_systemid> <tbname> single|rr|distmap|all col1, ...
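A hedged example for a hypothetical fact table sales_fact hashed on cust_id and a dimension table store_dim placed on all blades. The method keywords come from the reconstructed usage line above, so confirm them against the User Guide before use; <DT_systemid> is left as a placeholder.

    regtable <DT_systemid> sales_fact distmap cust_id
    regtable <DT_systemid> store_dim all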
Unit 5: Manual Step 2 - The dttable Command
- Creates, alters, truncates, and drops tables on the Dataupia array.
- Usage (a worked example follows this list):
- dttable <command> -t tablename options
- dttable create -t <tablename> -c <column name> <column data type>, ... or -f <column definition file>
- dttable rename -t <old tablename> -n <new tablename>
- dttable add_column -t <tablename> -c <column name> <column data type>
- dttable alter_column -t <tablename> -c <column name> -n <new column data type>
- dttable rename_column -t <tablename> -c <old column name> -n <new column name>
- dttable drop_column -t <tablename> -c <column name>
- dttable create_index -t <tablename> -i <indexname> -c <column name>, <column name>, ...
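A sketch of creating a small array table and an index with these commands. The table, column, and index names are hypothetical, and the exact data type spellings should be checked against the data type mapping in Unit 6.

    dttable create -t sales_fact -c cust_id bigint, sale_date timestamp, amount double precision
    dttable create_index -t sales_fact -i idx_sale_date -c sale_date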
Unit 5: Manual Step 3 - Rename Oracle Tables
- Rename the Oracle tables so that queries that reference the now-delegated tables bypass the original tables.
- Example:
- my_table1 is now on the Dataupia array.
- Rename my_table1 to my_table1_orig.
- Oracle syntax:
- alter table my_table1 rename to my_table1_orig
Unit 5: Manual Step 4 - Create Oracle References to the Array Table
- Use the original names as a reference to the array table in one of two ways (a combined example follows):
- Create an Oracle synonym for the array table as a remote object:
- CREATE SYNONYM MYSYNONYM FOR MYTABLE@DTNAS
- Create a view of the array table in Oracle:
- CREATE VIEW MYVIEW AS (SELECT * FROM MYTABLE@DTNAS)
- The reference then replaces the Oracle table:
- SELECT * FROM MYSYNONYM is equivalent to SELECT * FROM MYTABLE@DTNAS
- or
- SELECT * FROM MYVIEW is equivalent to SELECT * FROM MYTABLE@DTNAS
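Putting manual steps 3 and 4 together for the my_table1 example; the database link name DTNAS comes from the slides, and reusing the original table name for the synonym (so existing queries keep working unchanged) is the usual intent, though the slides do not show that step explicitly.

    -- Step 3: move the original Oracle table out of the way
    alter table my_table1 rename to my_table1_orig;

    -- Step 4: point the original name at the delegated array table
    CREATE SYNONYM my_table1 FOR my_table1@DTNAS;

    -- Existing queries keep working against the original name
    SELECT count(*) FROM my_table1;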
Unit 5: Indexing
- Native indexes delegated to Dataupia are retained.
- Additionally, Dataupia uses indexing approaches optimized for large data workloads.
- Disk indexing supports record-based optimized storage and rapid retrieval.
- Dataupia indexing is transparent to the application.
- Optimized Hilbert R-tree Index
- Built-in index for every table.
- Designed for clustered data in which target rows are physically close.
- Example: time-sequenced data loaded in chronological order and often queried by date or time.
- Balanced Bucket Index (BBI)
- Defined explicitly using the dt_cli utility.
- Designed for data in which the target rows are physically dispersed.
- Example: queries against non-chronological columns such as phone number.
Lab, Unit 5: Delegating Tables
- Unload Oracle tables prior to delegating
- Delegate existing tables using the Delegator
- Create and register array tables using dttable
- Rename tables in Oracle
- Create and test an Oracle view/synonym
Unit 6: Dataupia Data Loader
- How it works
- Writing data description files for CSV and binary data
- Command line options and scripting
- dtlscan testing utility
- Potential errors and troubleshooting the Loader
Unit 6: How the Data Loader Works
Unit 6: How the Data Loader Works (continued)
Unit 6: Data Type Mapping
Unit 6: Data Description Files
Description file for a CSV (ASCII) data file with a directive to omit trailing characters:
    VERSION(1)
    CONTROL set-mode(ascii) set-record-size(variable) ENDCONTROL
    DATA ( RECORD_TYPE string(1) SEQ_NUM int(8) RECORD_NUM int(3) ORIG_NUM string(32) ROUTE string(7) JUNK skip(1) ) ENDDATA
Description file for a binary data file with a modification directive for the first field:
    VERSION(1)
    CONTROL set-mode(binary) ENDCONTROL
    DATA ( CALL_DATE int(8) string(14) datetime(YmdHMS) OPERATOR string(5) int(4) HR_NUM int(1) ST_CALL_TIME int(2) POI_NNI int(4) SEQNO int(8) DIALLED_DIG_STR string(31) WHOLESALE_PRC float(8) ) ENDDATA
EXTENSION / ENDEXTENSION is an optional section for defining transformations and operations.
Unit 6: Description File CONTROL Section Directives
- set-mode(): binary or ascii. Required.
- set-record-size(): fixed (binary or ASCII) or variable (ASCII only). Required (even for binary files).
- set-endian(): little or big (binary only). Optional for binary files.
- set-delimiter(), set-terminator(), set-quote(): Define the characters used in a variable-record (delimited) ASCII file to separate fields within a record, indicate the end of each record, and quote data within a field (when a parsing directive extracts quoted data). Arguments can be a single literal character or a back-quoted escape sequence such as \t (tab), \r (return), or \n (newline), for example set-terminator(\n). Required for variable-record ASCII files; however, the default delimiter is comma (,) and the default terminator is RETURN, so set-delimiter() and set-terminator() are not required for standard CSV files. (A sample CONTROL section follows.)
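To illustrate these directives, a CONTROL section for a hypothetical pipe-delimited, newline-terminated ASCII file might look like the following; passing the delimiter as a literal character inside the parentheses is an assumption based on the description above.

    VERSION(1)
    CONTROL set-mode(ascii) set-record-size(variable) set-delimiter(|) set-terminator(\n) ENDCONTROL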
Unit 6: Description File DATA Section Parsing Directives
Each directive reads a field from the data stream and creates a universal type compatible with certain Dataupia datatypes:
- int(n): Reads an integer of size n (1-8 bytes) from binary files; n (1-20) digits from fixed-record ASCII files; up to n (1-20) digits from variable-record ASCII files. Creates INT (integer). Compatible with char, smallint, integer, bigint (if values fit the datatype) and date, time, timestamp (if values fit UNIX-style UTC time).
- string(n): Reads a string of n chars (n >= 1) from binary and fixed ASCII files; up to n chars from variable ASCII files. Creates STRING (string). Compatible with char, char(), varchar().
- float(n): Reads a floating point number (IEEE 754 format) of size n (4 or 8 bytes) from binary files; n (1-20) digits from fixed-record ASCII files; up to n (1-20) digits from variable-record ASCII files. Creates FLOAT (single/double precision floating point). Compatible with double precision, real (if values fit the datatype).
- datetime(format): Reads a date or time string; the format determines which digits represent which time units (see the User Guide). Creates DATETIME (date/time). Compatible with date, time, timestamp.
- number(m,n): Reads an arbitrary-precision decimal number with m digits before the decimal and n after (up to m before and up to n after). Creates NUMBER (general numeric). Compatible with numeric, smallint, integer, bigint, double precision, real (if values fit the datatype).
- skip(n): Reads nothing; skips the next n bytes (n >= 1) of the input stream (binary and fixed ASCII files) or skips until the next delimiter (variable-record ASCII files). Creates no data.
Unit 6: Description File Modification Directives
- Use int(), string(), and datetime() to modify parsed data as needed. For example:
- CALL_DATE int(8) string(14) datetime(YmdHMS)
- The binary data file contains eight-byte binary integers encoding 14-digit decimal timestamps; e.g., the timestamp 2007-02-10 16:09:22 is represented as the integer 20070210160922 and encoded in the binary value 0x00001240f5b4491a.
- The parsing directive int(8) converts eight bytes of the input stream to a decimal integer.
- The modification directive string(14) converts the digits of the integer to a 14-character string.
- The modification directive datetime() converts the string to a timestamp, with the first four characters as the year and the remainder as two-digit month, day, hour, minute, and seconds.
- The field is loaded into the Dataupia table column CALL_DATE of type timestamp.
Unit 6: Scripting the Loader
- Here is a script to:
- load all data files with extension .data from directory datadir
- load into table bigtable on the array with ID 12345678
- using description file bigtable.f
- logging to bigtable.log
- writing information about loaded files to array table loaded_bigtable
- writing up to 10 error records to array table bad_records_bigtable

    find datadir -name "*.data" | dtldr -C dtarrayid=12345678 -D bigtable.f -T bigtable -E 10 -L bigtable.log
Unit 6: Loader Command Line Options
-C dtarrayid=<array_id>  Array on which the target table is located (required)
-T <table name>  Specify the target array table (required). (Can also be used to specify the names of the loaded-files table and error table on the array, as well as the templates used to create these tables. See the User Guide.)
-D <description file>  Data description file (path) (required)
-f <load file>  Load the specified file. Without this option, the loader reads file names from standard input as long as it remains open.
-u  Remove each file upon successful loading.
-X <records>  Create a transaction checkpoint and commit data after loading the specified number of records.
-E <count>  Error threshold. If <count> is negative, abort the loader after that many errors. If <count> is positive, do not abort, and record no more than <count> errors in the error table. Default is -1 (abort on first error).
-Q  Quiesce (hold a lock on) the table, preventing execution of queries against it during the load.
-L <file>  Loader log file (path).
-r <file>  Report loading rates to the specified file (path) at every checkpoint (diagnostic option).
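A sketch of a single-file load using several of these options together. The file and table names are hypothetical, and the dtarrayid=<array_ID> argument form follows the reconstructed usage above.

    dtldr -C dtarrayid=12345678 -D sales_fact.f -T sales_fact \
          -f /data/sales_2015_06.csv -X 100000 -E 100 -Q -L sales_fact.log
    (checkpoint every 100,000 records, continue past errors but record at most 100 of them,
     quiesce the table during the load, and log to sales_fact.log)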
Unit 6: Loader Diagnostics
- Use the -T option of dtldr to name the status tables and specify templates.
- Loaded-files table columns (see the example query below):
- fkey: A file key (integer) to identify the bad record's source
- fname: Name of the data file (varchar(200)) passed to the loader
- fmtime: The file's system time (timestamp without time zone)
- nrecords: Number (bigint) of records in the specified data file
- Bad-record table columns:
- fkey: A file key (integer) to identify the bad record's source
- field: Name (varchar(120)) of the field being parsed when the error was detected
- rec_offset: Offset (bigint), in bytes, of the bad record from the beginning of the data file
- error: Code (integer) describing the nature of the error
- input_data: The bad record as a hex-encoded string (varchar(32000))
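Given those columns, a query along these lines (using the bigtable names from the scripting slide) relates bad records back to the files they came from. How you reach these array tables from the host, for example through an Oracle synonym or database link as in Unit 5, depends on your setup, so treat this as a sketch.

    SELECT l.fname, b.field, b.error, b.rec_offset
    FROM   loaded_bigtable l
    JOIN   bad_records_bigtable b ON b.fkey = l.fkey
    ORDER  BY l.fname, b.rec_offset;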
Unit 6: dtlscan Utility
- dtlscan -D data_description -f data_file -p -E -o -t -n -e -r
Scans, analyzes, and converts data files using data description files to predict or isolate errors encountered by the loader. Does not interact with the database server or the array. (Examples below.)
-D data_description  Parse data using the specified data description file (required)
-f data_file  Parse data from the specified file (required)
-p  Parse the data file specified by -f using the data description specified by -D
-E  Continue parsing after errors
-o  Write parsed data to standard output in CSV format
-t  Embed parsed datatypes in the CSV output when -o is used
-n  Output NULLs as (NULL) (to render them visible) when -o is used
-e  Echo the data in the file specified by -f to standard output
-r  Report data record statistics
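Two typical invocations, reusing the bigtable.f description file from the scripting slide; the data file name is hypothetical.

    dtlscan -D bigtable.f -f batch1.data -p -E -r
    (validate the description file against the data, continue past errors, and report record statistics)

    dtlscan -D bigtable.f -f batch1.data -p -o -t -n > batch1.csv
    (write the parsed data as CSV to standard output, with datatypes embedded and NULLs made visible)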
Unit 6: Potential Data Loader Problems
- Description file
- Syntax errors
- Does not match the input data
- Does not match the target table
- Field needs further modification to be compatible with the target column
- Data errors
- NULL data
- Mismatched or unsupported format
- Illegal value
Unit 6: Troubleshooting
- Four ways to investigate errors:
- Use dtlscan with the -p and -E options to verify the description file and isolate bad records before loading.
- Review the loaded-files table on the array for information about data files loaded into the target table.
- Review the bad-records table on the array for information about records that could not be loaded into the target table and generated an error.
- Use the dtldr -L <logfile> option to write information about errors to a log file and review the log contents.
- Setting the error limit with the dtldr -E <count> option:
- Negative count: number of errors to record before dtldr aborts
- Positive count: number of errors to record before dtldr stops recording (but continues execution)
Unit 6: The dtunload Command
- Purpose
- Unloads data from Dataupia array tables. Useful for archiving or backups.
- Usage (example below)
- dtunload -C dtarrayid=<array_ID> -q <query> -o <options>
- Arguments
- <array_ID>: Array ID of the array on which the table is located
- <query>: Query to obtain the desired rows from the table, of the form select <col1>, <col2>, ... from <table>
- <options>: Options for formatting the unloaded data, including CSV for CSV format and options to specify delimiter, null, quote, and escape characters
- Arguments with spaces or other separators must be quoted.
- Output goes to standard output by default. Redirect it to a file.
Lab, Unit 6: Loading Data
- Analyze and understand the input and data description files
- Use dtlscan to test and correct a description file
- Write a data description file
- Use dtldr to load data
- Review the results of the load, including the loaded_ and bad_records_ tables on the array
- Truncate the target table, fix errors, and reload
Unit 7: Troubleshooting
- RAID-5 failover
- Replacing a failed drive
- Troubleshooting network problems
- Restarting a blade
- Replacing a blade
- Getting support
Unit 7: RAID-5 Disk Drive Failover
- Each blade has eight drives: seven active drives in a RAID-5 configuration plus one hot spare.
- If a drive fails, RAID fails over to the spare with no data loss.
- The blade operates in degraded mode (low disk space, degraded performance) until the failed drive is rebuilt on the spare.
- After the rebuild, replace the failed drive as soon as possible; the replacement becomes the new hot spare.
Unit 7: Replacing a Failed Drive
- Directly verify that you are working on the correct blade.
- Locate the drive to be removed.
- Record the number/location of the failed drive.
- Move the release lever to the right and pull on the release tab.
- Pull the drive out by gripping it with your fingers.
- Push the new drive into the bay as far as possible and close the release lever fully.
- The new drive becomes the hot spare.
Unit 7: Troubleshooting Network Problems
- Ping the blade with the suspected problem from another device on the network (or from all blades if the problem is not yet isolated).
- If the ping fails, confirm that the blade is on and operating.
- Connect a keyboard/monitor and try ping from the suspect blade.
- Check the blade's cable connections to the network switch. Are the lights green at both ends? Try replacing the cable or using a different switch port.
- Try bypassing the switch and connecting directly to the network. If the blade's HBA has failed, the chassis must be replaced.
- Use the blade restart command to restart network services, or blade reload for a warm reboot (see the command sequence below).
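The network tools listed in Unit 3 cover the first steps; a possible sequence, with hypothetical addresses, might look like the following. The modes in which blade restart and blade reload are issued are an assumption; check with ? first.

    blade101 > ping 192.168.1.102          (is the suspect blade reachable from a neighbor?)
    blade101 > traceroute 192.168.1.102    (where does the path break?)

    (on the suspect blade, after checking cables and the switch port)
    blade102 > enable
    blade102 blade restart                 (restart services; add globalsvcs to restart only global services)
    blade102 blade reload                  (warm reboot if restarting services is not enough)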
Unit 7: Restarting a Blade
- Different restart commands have different effects:
- blade restart
- Restart all services on the local blade.
- If issued with the globalsvcs argument, restart only global services.
- blade reload
- Reboot the blade without powering it off (warm restart).
- blade shutdown
- Shut down the blade and power it off (turn it on again to reboot).
Unit 7: Replacing a Chassis
- If you have eliminated a failed drive or a network issue as the root problem, the chassis must be replaced.
- Generally a chassis must be replaced if the CPU, HBA, memory, disk controller, power supply, or fan fails.
- The array will be inaccessible while the chassis is being replaced.
- No data loss is incurred by a chassis replacement.
- Contact Dataupia Support to arrange for chassis replacement.
Unit 7: Getting Support
- Information resources
- Logging a case
- Checking status
- Communication
Unit 7: Finding Information
- Product Usage
- Product Documentation
- DMC Online Help
- Release Notes
- Knowledge Base
- Engineering Briefs
- Troubleshooting
- DMC Health Tab
- Release Notes
- Dataupia Satori Server User Guide, Chapter 5, Troubleshooting
- Knowledge Base
- Log Files
Unit 7: Logging a Case
- The Dataupia Helpdesk is staffed 9-6 EST (GMT-5)
- Phone: 866-259-5971
- Email: support@dataupia.com
- The Portal is always open
- http://www.dataupia.com
- Click the Customer Login link at the top right of the window.
- Password provided to you by Dataupia
Unit 7: Customer Portal Home Page
Unit 7: Logging a Case (continued)
- Helpful information to include in the Description field:
- Dataupia Satori Server serial number
- Contact information for the person who will troubleshoot the problem with the Dataupia Support Engineer
- Error codes recorded on the equipment displays or trapped by the host
- What has been done so far to isolate the problem
Unit 7: Reviewing Cases
Double-click on a Case number to view its status or solution and to provide additional information.
Knowledge base of Solutions
Unit 7: Working with Dataupia
- After troubleshooting and raising a ticket:
- Who do they go to first?
- Second?
- What will work best to help you champion their issues?
Summary
- Survey
- Is there anything else we should add to class?
- Do you feel confident about what you learned?