Transcript and Presenter's Notes

Title: Optimizing interactive performance for long-distance remote observing


1
Optimizing interactive performance for
long-distance remote observing
  • Robert Kibrick and Steven L. Allen
  • University of California Observatories / Lick
    Observatory
  • Al Conrad and Gregory D. Wirth
  • W.M. Keck Observatory
  • Advanced Software and Control for Astronomy
  • Orlando, Florida May 26, 2006

2
Remote observing with the Keck Telescopes: The
first 10 years
  • Robert Kibrick and Steven L. Allen
  • University of California Observatories / Lick
    Observatory
  • Al Conrad and Gregory D. Wirth
  • W.M. Keck Observatory
  • Advanced Software and Control for Astronomy
  • Orlando, Florida May 26, 2006

3
Overview of Presentation
  • Background
  • Historical evolution of Keck remote observing
  • The Keck Remote Observing Model
  • Remote observing from Waimea and California
  • Redirecting displays
  • Using X protocol
  • Using VNC protocol
  • Advantages and disadvantages of using VNC
  • Current usage patterns
  • Scheduling issues
  • Future plans

4
The Keck Telescopes
5
From 1993 to 1995, all Keck observing was done at
the summit
  • Observers at the summit work remotely from
    control rooms located adjacent to the telescope
    domes

6
Conducting observations involves coordinated
effort by 3 groups
  • Telescope operator (observing assistant)
  • Responsible for telescope safety and operation
  • Keck employee normally works at summit
  • Instrument scientist (support astronomer)
  • Expert in operation of specific instruments
  • Keck employee works at summit or Waimea
  • Observers
  • Select objects and conduct observations
  • Employed by Caltech, UC, NASA, UH, or other

7
Keck 2 Control Room at the Mauna Kea Summit
  • Telescope operator, instrument scientist, and
    observers work side by side, each at their own
    remote X Display

8
Observing at the Mauna Kea summit is both
difficult and risky
  • Oxygen is only 60% of that at sea level
  • Lack of oxygen reduces alertness
  • Observing efficiency significantly impaired
  • Altitude sickness afflicts some observers
  • Some are not even permitted on summit
  • Pregnant women
  • Those with heart or lung problems

9
Keck 2 Remote Control Room at the Keck
Headquarters in Waimea
  • Observer and instrument scientist in Waimea use
    video conferencing system to interact with
    telescope operator at the summit

10
The initial model for Keck Remote Observing
  • All observing applications run on summit control
    computers
  • All displays are re-directed to display hosts at
    each site

11
Why did Keck initially choose this approach?
  • Operational Simplicity
  • Operational control software runs only at the
    summit
  • All users run identical software on same computer
  • Simplifies management at each site
  • Allowed us to focus on commonality
  • Different sites / teams developed instrument
    software
  • Large variety of languages and protocols were
    used
  • BUT all instruments used X-based GUIs
  • Legacy GUI applications (e.g., not web-based)

12
Initiative to support remote observing from Keck
Headquarters
  • 1995: Remote control rooms built at Keck HQ
  • 1996: Remote observing with Keck 1 begins
  • 1997: >50% of Keck 1 observing done remotely
  • 1999: Remote observing >90% for Keck 1 and 2
  • 2000: Remote observing became the default mode

13
Remote Observing from Waimea is not cost
effective for short runs
  • Round trip travel time is 2 days
  • Travel costs > $1,000 (U.S.) per observer
  • About 50% of runs are for 1 night or less
  • Cost / run is very high for such short runs
  • Such costs limit student participation

14
Explore the feasibility of remote observing from
the mainland
  • Initial experiments (1996-2000)
  • Caltech experiments with NASA satellite
  • UCSC experiments with Internet-2 link
  • 2000-2001: ISDN fallback tests at UCSC
  • 2001: ISDN router installed at Keck summit
  • 2001: Prototype facility at UCSC online
  • UCSC facility used as model for other sites

15
Keck Observatory remote observing sites
  • Location / First use
  • Remote Ops 1, Waimea, HI: 1996
  • Remote Ops 2, Waimea, HI: 1997
  • UC Santa Cruz, CA: 2001
  • UC San Diego, CA: 2003
  • Caltech, Pasadena, CA: 2004
  • UC Berkeley/LBNL, CA: 2005
  • UC Los Angeles, CA: 2006

16
17
Santa Cruz Remote Observing Facility
  • Remote observer in California uses video
    conferencing system to interact with colleague in
    Waimea

18
UC Los Angeles (UCLA) remote observing facility
  • The newest Keck remote observing facility in CA

19
UC Los Angeles (UCLA) remote observing facility
20
21
Limitations of using X to redirect displays to
the mainland
  • Sluggish performance for some operations
  • very slow application startup compared to Waimea
  • slow creation of new windows and pulldown menus
  • guider display update rate is too low
  • Inability to share single-user applications
  • figdisp realtime image display (LRIS, HIRES,
    ESI)
  • various data reduction packages
  • Some applications sensitive to inter-site font
    variations

22
Virtual Network Computing (VNC)
  • VNC server provides shareable virtual desktop
  • VNC clients (viewers) offer remote access to that
    desktop
  • All clients share that desktop (application
    sharing)
  • All state is retained in the server, none in the
    clients
  • Clients can connect/disconnect without affecting
    session
  • VNC protocol works at the framebuffer level
  • VNC available on most OS / windowing systems

23
Redirecting displays to remote sites using the VNC
protocol and ssh
  • An ssh port-forwarding tunnel is used to relay
    VNC protocol packets and authentication across
    network firewalls to the remote site.
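
A minimal sketch of that setup, assuming hypothetical host names and the
conventional VNC port mapping (desktop :1 on TCP port 5900+1); the actual Keck
hosts, ports, and launch scripts differ:

```python
import subprocess
import time

SUMMIT_HOST = "vnchost.summit.example.org"  # hypothetical summit VNC host
DISPLAY_NUM = 1                             # VNC desktop :1 listens on 5900 + 1
PORT = 5900 + DISPLAY_NUM

# 1. Open the ssh port-forwarding tunnel (-L) through the firewall.
#    -N runs no remote command; -C adds in-stream compression, which the
#    timing measurements later in these slides show helps over a high-RTT link.
tunnel = subprocess.Popen(
    ["ssh", "-N", "-C", "-L", f"{PORT}:localhost:{PORT}", SUMMIT_HOST]
)
time.sleep(5)  # crude wait for the tunnel; a real script would poll the port

# 2. Only then start the VNC viewer, pointed at the local end of the tunnel.
#    Adding "-viewonly" gives the read-only 'look but don't touch' mode.
subprocess.run(["vncviewer", f"localhost:{DISPLAY_NUM}"])

tunnel.terminate()
```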

24
Benefits of using VNC
  • Single-user applications can be shared between
    sites
  • Shared desktop promotes training and
    collaboration
  • All sites have an identical view and see each other's
    actions
  • Shared desktop persists even if a remote VNC client
    crashes
  • Optional read-only sharing for 'look but don't
    touch'
  • X clients connect to a local X server (short
    RTTs)
  • X client applications run on control computer at
    summit
  • VNC server runs on computer at the summit
  • X clients see the VNC server as their local X
    server
  • Speeds up client functions that require multiple
    X transactions
  • Application startup and initial painting of
    displays
  • Creation of pop-up windows and sub-panels
  • Instantiation of pull-down menus
  • Loading of fonts

25
Disadvantages of using VNC
  • Some operations are slower across a low bandwidth
    link
  • iconify / de-iconify operations are very slow
    (no backing store)
  • color map scrolling is slower than under a
    native X connection
  • Ssh has no built-in capability for forwarding VNC
    packets
  • must start port-forwarding tunnel (ssh -L)
    before starting the VNC viewer
  • Shared desktop is sometimes a source of user
    confusion
  • Keyboard / mouse input from all connected
    clients is merged
  • Users at different sites could type or move
    mouse at same time
  • Potential for multiple users to create
    conflicting inputs
  • These conflicts can be reduced via use of
    '-viewonly' mode
  • ADVANTAGES OF VNC OUTWEIGH THE DISADVANTAGES

26
Issues and Interactions between VNC and X
  • X Visuals
  • Depth 8-, 16-, and 24-bit
  • PseudoColor, StaticColor, and TrueColor
  • Many X servers support multiple X visuals
  • VNC server supports only 1 visual at a time
  • VNC server must satisfy least capable client
  • Legacy X clients need 8-bit PseudoColor

27
Working within VNC's constraints: 8- or 24-bit
desktops?
  • We run a mix of 8-bit and 24-bit VNC desktops
  • Distinguish them by using a distinct desktop background color
  • Use 8-bit VNC desktops wherever possible
  • More efficient image updates / colormap
  • Lots of legacy GUIs require 8-bit visuals
  • Use 24-bit VNC desktops when required
  • Some GUIs require 24-bit visuals
  • Mix of GUIs exhausts 8-bit color map

28
Example configurations
  • Sites typically have a 3-screen and a 1-screen workstation
  • The 3-screen workstation typically runs instrument software
  • The 1-screen workstation typically runs telescope software
  • Run a virtual window manager on the local desktop
  • Have more than 4 window panes on each local screen
  • Run the 8-bit VNC viewer in one window pane
  • Run the 24-bit VNC viewer in another pane (sketched below)
  • Other panes allow access to local desktop
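
A minimal sketch of starting the two desktops, assuming a standard vncserver
script; display numbers, geometry, and desktop names are illustrative, not the
actual Keck configuration:

```python
import subprocess

# One 8-bit desktop for legacy PseudoColor GUIs and one 24-bit desktop;
# each is later given a distinct background color so they are easy to tell apart.
subprocess.run(["vncserver", ":1", "-depth", "8",
                "-geometry", "1280x1024", "-name", "obs-8bit"], check=True)
subprocess.run(["vncserver", ":2", "-depth", "24",
                "-geometry", "1280x1024", "-name", "obs-24bit"], check=True)

# Each site then runs one viewer per desktop, each in its own window pane of
# the local virtual window manager, e.g.:
#   vncviewer summit-host:1   (8-bit desktop)
#   vncviewer summit-host:2   (24-bit desktop)
```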

29
Other options explored
  • Xinerama
  • Allows multiple screens to be treated as one
  • Windows can span screens or move between
  • Does not work well in conjunction with VNC
  • GraphOn's GO-Global product (proprietary)
  • Provides very efficient remote access
  • All state saved on server side (like VNC)
  • Does not provide desktop sharing

30
Other options explored
  • x11vnc
  • Allows access to a non-virtual X11 desktop
  • Remote access w/out running within VNC
  • Relies on physical polling of frame buffer
  • Useful for remote troubleshooting
  • Current version is prototype / pre-release
  • Not yet sufficiently robust for remote observing

31
Two Modes of Remote Observing from the Mainland
  • Remote eavesdropping (approximately 90%)
  • At least one member of observing team in Waimea
  • Other members of the team work from the mainland
  • Observers in Waimea have primary responsibility
    for operations
  • Observers on the mainland can 'eavesdrop' via VNC
  • Observers on the mainland can also operate
    instrument
  • Mainland-only (approximately 10%)
  • All members of observing team observe from
    mainland site(s)
  • Observers on the mainland have sole
    responsibility for operations

32
Current Usage Statistics for Keck mainland remote
observing
  • Overall usage has increased with growing number
    of sites
  • First three months of 2006: average of 8 nights per
    month
  • May 2006: 12 nights of remote observing from the
    mainland
  • Significant number of nights involve multiple
    mainland sites
  • Multi-site teams
  • Split nights (e.g., UCSC first half of night,
    UCLA second half)
  • Both telescopes
  • UCB/LBNL remote observer using LRIS on Keck I
    Telescope
  • UCLA remote observer using OSIRIS/LGS-AO on Keck
    2 Telescope

33
Scheduling Issues and current constraints
  • Waimea has two remote ops rooms, one for each
    telescope
  • Each mainland site has only a single remote ops
    room
  • Potential conflict if both observing teams from
    same site
  • 20 out of 181 nights (or 11%) in 2006A semester
  • Of those: 55% Caltech, 30% UCB, 10% UCLA, 5% UCSC
  • To date, no such conflicts have arisen
  • Only enough ISDN lines at summit to back up one
    site
  • Not a problem for split nights
  • A potential problem for mainland-only from two
    sites on same night

34
Future plans
  • Upgrade ISDN capacity at summit to support
    multiple sites (install more lines and upgrade the
    router, Summer 2006)
  • Installation of dedicated VNC server hosts at
    Keck summit
  • Continue to optimize VNC configurations and
    performance
  • Implement RO facilities at other Keck partner
    institutions
  • Develop scheme for dynamic allocation of VNC
    servers
  • Develop improved procedures for coordination
    between sites

35
Conclusion
  • Mainland remote observing (MRO) is operational
  • For all Keck optical instruments and most IR
    instruments
  • From 5 mainland sites (UCSC, UCSD, Caltech,
    UCB/LBNL, UCLA)
  • Shared VNC desktops used for most remote ops.
  • Provides competitive performance for most GUIs
  • Provides acceptable performance for image
    displays
  • MRO efficiency would be enhanced by
  • A distributed image display server / client
    (VO?)
  • A VNC server with simultaneous 8- and 24-bit support

36
Acknowledgments
  • U.S. National Science Foundation
  • University of Hawaii
  • Gemini Telescope Consortium
  • University Corp. for Advanced Internet
    Development (UCAID)
  • Corporation for Education Network Initiatives in
    California (CENIC)

37
Author Information
  • Robert Kibrick, UCO/Lick Observatory
  • University of California, Santa Cruz
  • California 95064, U.S.A.
  • E-mail: kibrick@ucolick.org
  • WWW: http://www.ucolick.org/kibrick
  • Phone: 1-831-459-2262
  • FAX: 1-831-459-2298

38
END OF PRESENTATION !!!
  • SPARE SLIDES FOLLOW

39
Limitations of Remote Observing from Keck HQ in
Waimea
  • Most Keck observers live on the mainland.
  • Mainland observers fly > 3,200 km to get to
    Waimea
  • Collective direct travel costs exceed $400,000
    (U.S.) per year

40
Keck Telescopes use Classical Scheduling
  • The Keck Telescopes were not designed for queue scheduling
  • Schedules cover a semester (6 months)
  • Approved proposals get 1 or more runs
  • Each run is between 0.5 and 5 nights long
  • Gaps between runs vary from days to months

41
Factors contributing to sluggish performance at
mainland sites
  • Lower inherent bandwidth
  • High round trip time (RTT) between Keck and
    California
  • Yields lower effective bandwidth for un-tuned
    TCP
  • Slows any functions requiring multiple round
    trips
  • High RTT limits benefit of tuning or compression
  • Routing and propagation delays change over time
  • Keck/California link hops and RTT:
  • 1998: 12 hops / 70 ms average RTT
  • 2004: 22 hops / 100 ms average RTT
  • 2006: 17 hops / 90 ms average RTT
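
A back-of-the-envelope sketch of why a high RTT lowers effective bandwidth for
un-tuned TCP, assuming a typical 64 KB default window (the RTT and hop figures
are those listed above):

```python
# Throughput of a single TCP stream is bounded by roughly window_size / RTT.
def tcp_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # common un-tuned default TCP window (assumption)
for year, hops, rtt in [(1998, 12, 70), (2004, 22, 100), (2006, 17, 90)]:
    print(f"{year}: {hops} hops, {rtt} ms RTT -> "
          f"~{tcp_ceiling_mbps(WINDOW, rtt):.1f} Mbps ceiling")
# At ~100 ms RTT an un-tuned 64 KB window caps a single flow near 5 Mbps,
# well below the capacity of the links described later in these slides.
```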

42
Issues and Interactions between VNC and X
  • Colormap issues
  • slower scrolling if pixel retransmit needed
  • colormap flashing if insufficient colors
  • private color maps (Not supported in v4)
  • Whitepixel and blackpixel conflicts
  • Fonts are supplied by the X server
  • Using X model, X server is local to observer
  • Using VNC model, X server is at summit

43
An alternative topology for using VNC with ssh
and X
  • The VNC server and VNC viewer are both run on the
    same computer at the summit. The X display
    generated by the VNC viewer is re-directed to the
    remote site via a standard ssh tunnel.
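
A one-line sketch of this topology with a hypothetical summit host name, using
ssh X11 forwarding (-X) to carry the viewer's display back to the remote site:

```python
import subprocess

# Both the VNC server (:1) and the VNC viewer run at the summit; only the
# viewer's X output travels through the ssh tunnel to the remote site's X server.
subprocess.run(["ssh", "-X", "-C", "observer@summit.example.org",
                "vncviewer", "localhost:1"])
```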

44
Benefits of this topology
  • Retains most benefits of the first VNC topology
  • Single-user applications can be shared between
    sites
  • Observing X clients connect to a local X server
    (short RTTs)
  • Speeds up functions that require multiple X
    transactions
  • VNC not needed at remote sites, only at the
    summit
  • Avoids ssh port-forward complexity

45
Using a mix of both protocols
  • Neither protocol is optimal for all applications
    and sites
  • Some functions work better under X
  • image display functions (pan, zoom in / out,
    color map scroll)
  • iconifying / de-iconifying windows (if backing
    store enabled)
  • any functions more sensitive to bandwidth than
    to RTT
  • Some functions work better under VNC
  • creation of new windows, pop-ups, and sub-panels
  • instantiation of pull-down menus
  • any applications that are RTT sensitive (Keck
    guider eavesdrop)
  • Most output-only applications work OK using
    either

46
Multiple monitors facilitate a mix of X and VNC
protocols
  • For example, one monitor can display a shared VNC
    desktop while the others carry the redirected X
    displays of X clients running at the summit.

47
Measurements of remote performance: ds9 startup
  • Local ds9 X client and server: 6.6 seconds
  • Summit ds9 display redirected to mainland:
  • direct to mainland X server: 72.0 s (76.0 s)
  • via mainland VNC viewer: 11.0 s (27.5 s)
  • via VNC viewer at summit: 10.6 s (26.2 s)
  • values in ( ) are without ssh compression
  • In-stream compression helps in all cases

48
Measurements of remote performance: file chooser
popup
  • Local ds9 X client and server: 3.0 seconds
  • Summit ds9 display redirected to mainland:
  • direct to mainland X server: 10.3 s (11.9 s)
  • via mainland VNC viewer: 3.0 s (7.7 s)
  • via VNC viewer at summit: 3.0 s (6.8 s)
  • values in ( ) are without ssh compression
  • In-stream compression helps in all cases

49
Measurements of remote performance: ds9 zoom in
  • Local ds9 X client and server: 0.7 seconds
  • Summit ds9 display redirected to mainland:
  • direct to mainland X server: 4.0 s (10.2 s)
  • via mainland VNC viewer: 9.0 s (17.0 s)
  • via VNC viewer at summit: 6.5 s (11.5 s)
  • values in ( ) are without ssh compression
  • In-stream compression helps in all cases

50
Measurements of remote performance: ds9 draw cut
for plot
  • Local ds9 X client and server: 0.0 seconds
  • Summit ds9 display redirected to mainland:
  • direct to mainland X server: 0.1 s (3.0 s)
  • via mainland VNC viewer: 4.0 s (10.0 s)
  • via VNC viewer at summit: 4.5 s (13.5 s)
  • values in ( ) are without ssh compression
  • In-stream compression helps in all cases

51
Summary
  • Can redirect displays of legacy X clients
  • using X protocol
  • using VNC protocol
  • using a combination of both protocols
  • Choice depends on functionality of each X client
  • Good remote performance for most X GUI clients
  • Need better remote performance for ds9

52
Future directions
  • Explore distributed ds9-like image displays
  • ds9server: an image / image-section server
  • Runs on the instrument control computer on summit
  • Interfaces to multi-HDU FITS images on disk or in
    shmem
  • Extracts subset of pixels requested by display
    client(s)
  • Efficiently transmits pixels and WCS-info to
    display client(s)
  • ds9viewer: an image / image-section client
  • Runs at remote site and provides local GUI to
    observer
  • Converts GUI events into requests transmitted to
    ds9server
  • Receives pixel / WCS-info stream transmitted by
    ds9server
  • Displays pixel stream locally as a bitmap image
    or plot

53
Future directions
  • Explore distributed ds9-like image displays
  • Design the image server / viewer protocol to:
  • Minimize round trips between client(s) and server
  • Support progressive/lossy transmission of image
    sections
  • Enable support of a web-based client/viewer
  • Enable support of an X-based client/viewer
  • Challenges: ds9 live plots, magnifier window
  • Test protocol with the NIST Network Emulation Tool
  • Simulates network latency, bandwidth, jitter
  • http://www.antd.nist.gov/tools/nistnet/index.html
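
A minimal sketch of the request/extract step such a ds9server might perform;
the message format, field names, and use of NumPy are assumptions made for
illustration, since these slides only outline the design goals:

```python
import json
import struct

import numpy as np

def make_request(x0: int, y0: int, nx: int, ny: int, binning: int = 1) -> bytes:
    """ds9viewer side: one compact message per pan/zoom keeps round trips low."""
    return json.dumps({"x0": x0, "y0": y0, "nx": nx, "ny": ny,
                       "bin": binning}).encode()

def serve_section(image: np.ndarray, request: bytes) -> bytes:
    """ds9server side: extract (and optionally bin) only the requested pixels."""
    req = json.loads(request)
    sec = image[req["y0"]:req["y0"] + req["ny"],
                req["x0"]:req["x0"] + req["nx"]]
    if req["bin"] > 1:                            # coarse subsampling gives a
        sec = sec[::req["bin"], ::req["bin"]]     # quick, progressive first look
    header = json.dumps({"shape": sec.shape, "dtype": str(sec.dtype)}).encode()
    return struct.pack("!I", len(header)) + header + sec.tobytes()

# Example: a viewer zooming into a 256x256 section of a 2048x2048 frame
frame = np.zeros((2048, 2048), dtype=np.uint16)
reply = serve_section(frame, make_request(1024, 1024, 256, 256, binning=2))
```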

54
Keck 2 Remote Observing Room as seen from the
Keck 2 summit
  • Telescope operators at the summit converse with
    astronomers at Keck HQ in Waimea via the
    videoconferencing system.

55
Videoconferencing has proved vital for remote
observing from Waimea
  • Visual cues (body language) important!
  • Improved audio quality extremely valuable
  • A picture is often worth a thousand words
  • Troubleshooting live oscilloscope images
  • Cheap desktop sharing (LCD screens)
  • Chose dedicated versus PC-based units
  • Original (1996) system was PictureTel 2000
  • Upgrading to Polycom Viewstations

56
Interaction between video-conferencing and type
of monitors
  • Compression techniques are motion sensitive
  • Moving scene requires more bandwidth
  • CRT monitors cause flicker in VC image
  • Beating of frequencies: camera vs. CRT
  • CRT phosphor intensity peaking, persistence
  • CRT monitor flicker causes problems
  • Wastes bandwidth and degrades resolution
  • Visually annoying / nausea inducing
  • Use LCD monitors to avoid this problem

57
The Keck Headquarters in Waimea
  • Most Keck technical staff live and work in
    Waimea. Allows direct contact between observers
    and staff. Visiting Scientists Quarters (VSQ)
    located in same complex.

58
Motivations for Remote Observing from the U.S.
Mainland
  • Travel time and costs greatly reduced
  • Travel restrictions accommodated
  • Sinus infections and ruptured ear drums
  • Late stages of pregnancy
  • Increased options for
  • Student participation in observing runs
  • Large observing teams with small budgets
  • Capability for remote engineering support

59
Santa Cruz Remote Observing Video Conferencing
  • Remote observers' colleague in Waimea, as seen
    from Santa Cruz remote ops

60
The Weather in Waimea
  • Remote observer in California points
    remotely-controlled camera at the window in
    Waimea remote ops

61
The Remote Observing Facility at Keck
Headquarters in Waimea
  • Elevation of Waimea is 800 meters
  • Adequate oxygen for alertness
  • Waimea is 32 km NW of Mauna Kea
  • 45 Mbps fiber optic link connects 2 sites
  • A remote control room for each telescope
  • Videoconferencing for each telescope
  • On-site dormitories for daytime sleeping

62
Mainland remote observing goals and
implementation strategy
  • Goals
  • Target mainland facility to short duration runs
  • Avoid duplicating expensive Waimea resources
  • Avoid overloading Waimea support staff
  • Strategy
  • No mainland dormitories; observers sleep at home
  • Access existing Waimea support staff remotely
  • Restrict mainland facility to experienced
    observers
  • Restrict to mature, fully-debugged instruments

63
Mainland remote observing facility is an
extension of Keck HQ facility
  • Only modest hardware investment needed
  • Workstations for mainland remote observers
  • Network-based videoconferencing system
  • Routers and firewalls
  • Backup power (UPS) especially in California!!!
  • Backup network path to Mauna Kea and Waimea
  • Avoids expensive duplication of resources
  • Share existing resources wherever possible
  • Internet-2 link to the mainland
  • Keck support staff and operational software

64
The initial model for Keck Remote Observing
  • The control computers at the summit
  • Each telescope and instrument has its own
    computer
  • All operational software runs only on these
    computers
  • All observing data written to directly-attached
    disks
  • Users access data disks remotely via NFS or
    ssh/scp
  • The display workstations
  • Telescopes and instruments controlled via X GUIs
  • All users access these X GUIs via remote X
    displays
  • X Client software runs on summit control
    computers
  • Displays to X server on remote display workstation
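
A minimal sketch of that model, with hypothetical host, command, and path names;
ssh X11 forwarding and scp stand in for whatever redirection and file-transfer
mechanisms are actually deployed:

```python
import subprocess

# X GUIs: the client runs on the summit control computer and displays back to
# the X server on the observer's workstation.
subprocess.run(["ssh", "-X", "observer@summit-control.example.org",
                "instrument_gui"])               # hypothetical GUI command

# Data access: mainland observers pull frames from the summit data disks with
# scp (Waimea observers mount the same disks over NFS instead).
subprocess.run(["scp",
                "observer@summit-control.example.org:/data/tonight/*.fits",
                "/local/data/"])                 # hypothetical paths
```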

65
Operational simplifications
  • Only one copy of operational software to maintain
  • Only vanilla hardware / software needed at
    remote site
  • Simplifies sparing and swapping of equipment
  • Simplifies system maintenance at remote site
  • Simplifies authentication / access control

66
Focus effort on X standardization and
optimization over long links
  • Maintain consistent X environment between sites
  • Optimize X performance between sites
  • Eliminates need to maintain
  • Diverse instrument software at multiple sites
  • Diverse telescope software at multiple sites
  • Coordinate user accounts at multiple sites
  • Fewer protocols for firewalls to manage

67
Remote observing differences Waimea versus the
mainland
  • System Management
  • Keck summit and HQ share a common domain
  • Mainland sites are autonomous
  • Remote File Access
  • Observers at Keck HQ access summit data via NFS
  • Observers on mainland access data via ssh/scp
  • Propagation Delays
  • Summit to Waimea round trip time is about 1 ms.
  • Summit to mainland round trip time is about 100
    ms.

68
Shared access and control of instruments
  • Most software for Keck optical instruments
    provides native multi-user/multi-site control
  • All users have consistent view of status and data
  • Instrument control can be shared between sites
  • Multipoint video conferencing key to coordination
  • Some single-user applications can be shared via
    X-based application sharing environments
  • XMX: http://www.cs.brown.edu/software/xmx
  • VNC: http://www.uk.research.att.com/vnc

69
Increased propagation delay to mainland presents
challenges
  • Initial painting of windows is much slower
  • But once created, window updates fast enough
  • All Keck applications display to Waimea OK
  • A few applications display too slowly to mainland
  • System and application tuning very important
  • TCP window-size parameter (Web100 Initiative)
  • X server memory and backing store
  • Minimize operations requiring round trip
    transactions

71
Fast and reliable network needed for mainland
remote observing
  • 1997: 1.5 Mbps Hawaii -> Oahu -> mainland
  • 1998: 10 Mbps from Oahu to mainland
  • 1999: First phase of Internet-2 upgrades
  • 45 Mbps commodity link Oahu -> mainland
  • 45 Mbps Internet-2 link Oahu -> mainland
  • 2000: Second phase upgrade
  • 35 Mbps Internet-2 link from Hawaii -> Oahu
  • Now 35 Mbps peak from Mauna Kea to mainland
  • 2002: 155 Mbps from Oahu to mainland

72
End-to-end reliability is critical to successful
remote operation
  • Keck Telescope time is valued at $1 per second
  • Observers won't use the facility if it is not reliable
  • Each observer gets only a few nights each year
  • What happens if the network link to the mainland fails?
  • Path from Mauna Kea to mainland is long and complex
  • At least 14 hops crossing 6 different network
    domains
  • While outages are rare, consequences are severe
  • Even brief outages cause session collapse and panic
  • Observing time loss can extend beyond outage

73
Keck Observatory policy on mainland remote
observing
  • If no backup data path is available from mainland
    site, at least one member of observing team must
    be in Waimea
  • Backup data path must be proven to work before
    mainland remote observing is permitted with no
    team members in Waimea

74
Mitigation plan: install end-to-end ISDN-based
fall-back path
  • Install ISDN lines and routers at:
  • Each mainland remote observing site
  • Keck 1 and Keck 2 control rooms
  • Fail-over and fall-back are rapid and automatic
  • Toll charges incurred only during network outage
  • Lower ISDN bandwidth reduces efficiency, but
  • Observer retains control of observations
  • Sessions remain connected and restarts avoided
  • Prevents observer panic

75
Summary of ISDN-based fallback path
  • Install 3 ISDN lines (6 B channels) at each site
  • Install Cisco 2600-series routers at each end
  • Quad BRI interfaces
  • Inverse multiplexing
  • Caller ID (reject connections from unrecognized
    callers)
  • Multilink PPP with CHAP authentication
  • Dial-on-demand (bandwidth-on-demand)
  • No manual intervention needed at either end
  • Fail-over occurs automatically within 40 seconds
  • Uses GRE tunnels, static routes, OSPF routing

76
Running OSPF routing over a GRE tunnel
  • On each router, we configure 3 mechanisms
  • A GRE tunnel to the other endpoint
  • A floating static route that routes all traffic
    to the other endpoint via the ISDN dialer
    interface
  • A private OSPF domain that runs over the tunnel
  • OSPF maintains its route through the tunnel only
    if the tunnel is up
  • OSPF dynamic routes take precedence over floating
    static route

77
Fail-over to ISDN backup data path
  • If the Internet-2 path is up, OSPF hello
    packets flow across the tunnel between routers
  • As long as hello packets flow, OSPF maintains
    the dynamic route, so traffic flows through
    tunnel
  • If Internet-2 path is down, OSPF hello
    packets stop flowing, and OSPF deletes dynamic
    route
  • With dynamic route gone, floating static route is
    enabled, so traffic flows through ISDN lines
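
A toy model of that route selection, assuming the usual Cisco administrative
distances (110 for OSPF, a higher value such as 250 for the floating static
route); it illustrates the mechanism only, not the actual router configuration:

```python
def best_route(ospf_hellos_flowing: bool) -> str:
    """Pick the lowest administrative-distance route that currently exists."""
    routes = [("ISDN dialer (floating static)", 250)]  # always configured
    if ospf_hellos_flowing:                            # dynamic route exists only
        routes.append(("GRE tunnel (OSPF)", 110))      # while hellos get through
    return min(routes, key=lambda r: r[1])[0]

print(best_route(True))   # GRE tunnel (OSPF): traffic stays on the Internet-2 path
print(best_route(False))  # ISDN dialer: traffic fails over to the ISDN lines
```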

78
The hardest problem was the lodging!
  • LRIS operated from Santa Cruz all 5 nights
  • ISDN backup path activated several times
  • Observing efficiency comparable to Waimea
  • Lodging was the biggest problem
  • Motel check-in/check-out times incompatible
  • Required booking two motels for the same night
  • Motels are not a quiet place for daytime sleep

79
Fall-back to the normal Internet-2 path
  • OSPF keeps trying to send hello packets through
    the tunnel, even while the Internet-2 path is down
  • As long as the Internet-2 path remains down, the
    hello packets can't get through
  • Once the Internet-2 path is restored, hello
    packets flow between routers
  • OSPF re-instates dynamic route through tunnel
  • All current traffic gets routed through the
    tunnel
  • All ISDN calls are terminated

80
Operational costs of ISDN backup data path
  • Fixed leased cost is $70 per line per month
  • Three lines at each site -> $2,500 per site per year
  • Both sites -> $5,000 per year
  • Long distance cost (incurred only when active)
  • $0.07 per B-channel per minute
  • If all 3 lines are in use:
  • $0.42 per minute
  • $25.20 per hour
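
The arithmetic behind those figures, using the rates quoted on this slide (each
ISDN BRI line carries 2 B channels):

```python
LINES_PER_SITE = 3
B_CHANNELS = 2 * LINES_PER_SITE        # 2 B channels per BRI line -> 6 in total
PER_CHANNEL_PER_MIN = 0.07             # USD, long-distance rate per B channel

per_minute = B_CHANNELS * PER_CHANNEL_PER_MIN
print(f"${per_minute:.2f} per minute")                  # $0.42 with all lines up
print(f"${per_minute * 60:.2f} per hour")               # $25.20
print(f"${LINES_PER_SITE * 70 * 12:,} fixed per site per year")  # $2,520
```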

81
Recent operational experience
  • Remote observing science from Santa Cruz
  • Low Resolution Imaging Spectrograph (LRIS)
  • Echellette Spectrograph and Imager (ESI)
  • Remote engineering and instrument support
  • ESI
  • High Resolution Echelle Spectrometer (HIRES)
  • Remote Commissioning Support
  • ESI
  • DEIMOS (see SPIE papers 4841-155 and 4841-186)

82
Unplanned use of the facility during week of
Sept. 11, 2001
  • All U.S. commercial air traffic grounded
  • Caltech astronomers have a 5-day LRIS run on
    Keck-I Telescope starting September 13
  • No flights available
  • Caltech team leaves Pasadena morning of 9/13
  • Drives to Santa Cruz, arriving late afternoon
  • Online with LRIS well before sunset in Hawaii

83
Extending mainland remote observing to other sites
  • Other sites motivated by Santa Cruz success
  • Caltech remote facility is nearly operational
  • Equipment acquired
  • ISDN lines and router installed
  • Will be operational once routers are configured
  • U.C. San Diego facility being assembled
  • Equipment specified and orders in progress
  • Other U.C. campuses considering plans

84
Administrative challenges: scheduling shared
facilities
  • Currently only one ISDN router at Mauna Kea
  • Limits mainland operation to one site per night
  • Interim administrative solution
  • Longer term solution may require
  • Installation of additional ISDN lines at Mauna
    Kea
  • Installation of an additional router at Mauna Kea

85
Remaining challenges
  • TCP/IP tuning of end-point machines
  • Needed to achieve optimal performance
  • Conflicts with using off-the-shelf workstations
  • Conflict between optimal TCP/IP parameters for
    the normal Internet-2 path vs. the ISDN
    fall-back path (see the sketch after this list)
  • Hoping for vendor-supplied auto-tuning
  • Following research efforts of Web100 Project
  • Administrative challenges
  • Mainland sites are currently autonomous
  • Need to develop coordination with Keck
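
A bandwidth-delay-product sketch of that conflict, using the ~100 ms RTT and
45 Mbps Internet-2 rate quoted elsewhere in these slides; the 384 kbps ISDN
figure is an assumption based on 6 x 64 kbps B channels:

```python
def window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """TCP window needed to keep a path full: bandwidth times delay."""
    return bandwidth_bps * rtt_s / 8

RTT = 0.100                                    # ~100 ms summit-to-mainland RTT
print(f"Internet-2 (45 Mbps): {window_bytes(45e6, RTT) / 1024:6.0f} KB window")
print(f"ISDN x3   (384 kbps): {window_bytes(384e3, RTT) / 1024:6.1f} KB window")
# A window sized for the Internet-2 path is over 100x larger than the ISDN
# fall-back path needs, so one static tuning cannot be optimal for both.
```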

86
Summary
  • Internet-2 makes mainland operation feasible
  • Backup data path protects against interruptions
  • Keck HQ is the central hub for remote operation
  • Mainland remote observing model is affordable
  • Mainland sites operate as satellites of Keck HQ
  • Leverage investment in existing facilities and
    staff
  • Leverage investment in existing software
  • Share existing resources wherever feasible
  • Avoid expensive and inefficient travel for short
    runs
  • Model is being extended to multiple sites