Category Archives: Academia

Scenario Planning and Job Pathways – Two tools to help you plan your career.

I recently published an article in EDUCAUSE Review on using scenario planning and job pathways to help individuals think about their career plans.  I suggest starting with scenario planning, with a focus on changing skills and how the workforce needs to adapt, to get a sense of possible future skills and careers.  This acts as an input into Job Pathway planning where you look at career steps you could take and the skills needed to take each step.  Here is a link to the article if you would like to read it in full:

Scenarios, Pathways, and the Future-Ready Workforce

 

Architecture and finding the path

Ron Kraemer, our VP of Information Technology and CIO, spoke at the IT Leaders Program this week. He built on his blog post, Interdependence – Both Positive and Negative. To paraphrase:

The growing interdependence of our systems is driving their complexity towards the edge of chaotic systems. The choices that we make are no longer focused on finding the perfect solution. Instead, we can see many possible solutions, many of which are good. The choice is then to pick the solution that builds positive interdependency and limits negative interdependency.

Fig. 1: Growing interdependency has put us at the edge of complex and chaotic systems.

In his talk at ITLP, Ron also emphasized the ever-growing rate of change. These two factors limit our ability to design and implement perfect solutions to problems. To paraphrase again:

If you take two years to design a great solution, the landscape will have changed so much that the solution may not be applicable. The level of complexity makes finding and defining the perfect solution even more difficult. The level of interdependence means that even more good solutions are available – when many systems are connected, many systems could be used to provide the solution.

Fig. 2: Impossible Route to a Perfect Solution

I agree with what Ron has come to believe. The level of integration between systems is very high. The expectation for real-time interactions has become the norm. Users expect to see real-time flight information. They expect real-time updates on openings in courses. Students can see, in real time, the bus schedule, their own location, the nearest bus stop, and where the buses are on their routes.

This interdependence has driven complexity to the point where perfect solutions are hard, if not impossible, to design and deploy. Therefore, we must choose from the many good solutions that exist, and we need to act quickly to implement a solution that keeps pace with the rapid rate of change.

Fig. 3: Many good solutions

This is where Enterprise Architecture and the other architecture practices can help. If we look out to the future and think about the desired state, then we can begin to sift out those good solutions which move us towards that future state. For us, we had stated that Service Oriented Architecture was a strategic direction; that bounded the future state some. In the student area, we had a future-state process diagram that outlined improvements to the way students manage course data and move from finding courses to enrolling in them. This put another boundary on the future state. When it came time to think about how to get course roster information out to a new learning management system (Moodle), we were able to use that projected future state to pick, from the possible solutions (flat file transfer, shared database connections, web services), those which moved us closer to the future state.
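To make the web-services option concrete, here is a minimal sketch (in Python, with invented names and data – not our actual roster service) of what a course roster lookup exposed as a small HTTP service might look like:

    # Hypothetical sketch: expose course roster data as a small web service so
    # consumers (such as a learning management system) can pull a roster on
    # demand instead of waiting for a nightly flat-file extract.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Stand-in data; a real service would query the student information system.
    ROSTERS = {
        "BIO-101": ["student-1001", "student-1002", "student-1003"],
    }

    @app.route("/rosters/<course_id>")
    def get_roster(course_id):
        students = ROSTERS.get(course_id)
        if students is None:
            return jsonify({"error": "unknown course"}), 404
        return jsonify({"course": course_id, "students": students})

    if __name__ == "__main__":
        app.run(port=8080)

The point is less the framework than the contract: a consumer like Moodle asks for a roster when it needs one rather than depending on a batch file, which is the kind of loosely coupled interaction the SOA direction pointed us toward.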

Fig. 4: EA can help filter the good choices that move you towards the desired future state.

The rate of change and interdependency drive the importance of an architectural approach. If you have not thought about the future state, then there is a multitude of choices. To pick from many choices, you have to establish some factors that affect your selection. In a restaurant, this might be dietary restrictions, cost, or the weather outside. In technology, it is often whatever is quickest and cheapest. But those factors, in this complex environment, are often shortsighted and misguided. The quickest and cheapest solution might need to be replicated many times for many systems. This would increase the interdependency in a negative way and push you even closer to a chaotic system. A more expensive, slower solution might serve you well over the long haul.

Architecture can help you make those choices in a framework that is focused on the future and on the overall complexity that you are creating. Enterprise Architecture (and the other architecture practices) can help sort those good solutions and help make sure the choice you make is along the path to the desired future state.

SOA – Maturity is Key Presentation, EDUCAUSE Enterprise 2009

My presentation on SOA in the Enterprise – Maturity is Key has been posted in a couple of places.

First, on the EDUCAUSE site is the talk listing:

EDUCAUSE – Enterprise 2009 Site

Slides can be found at Slideshare.net:

Blue Sky to Ground part 1

 

 

Soaring


I’ve been working with our CIO on the I.T. strategic planning initiative.  At the same time, I’ve been working with the Technical Directors and Operational Directors on planning at the technology level.  They have been creating a map of what technologies are used to support our services.  I’ve had my head in the blue sky of the strategic planning process while I’ve also had my hands in the dirt of the technology mapping.   I keep coming up against the issue of how to connect the blue-sky of the strategic plan with the down-in-the-dirt technology planning.

Finding a process and methodology to connect the sky to the ground has taken up a lot of my mental cycles recently.   The following is my take on a method to connect the strategic planning to the technology planning. 

1.  Strategy to Capabilities

The first step is to take the general directives of a strategic plan and have them expressed in terms of capabilities.   I see this work being done by leadership as part of a collective planning exercise.   As an example, a strategic initiative might be: Classrooms and learning spaces will be equipped with a base set of instructional technologies.   This strategic direction then needs to be interpreted into a set of defined and measurable capabilities.    A leadership team would be charged with determining the capabilities that would meet this strategic direction.  The capabilities should be measurable.

For example, the capabilities might be:  Multimedia Projection, Student Response Measurement, and Lecture Capture.

We could survey all rooms and learning spaces and get measures of current state (for example: 65% of rooms meet the projector capability, 15% meet the student response capability and 10% meet the lecture capture capability).  We could then decide priority – which is more important, lecture capture or student response? – act on those priorities and measure improvement.
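As a back-of-the-envelope sketch of that measurement (hypothetical survey data, in Python), the arithmetic is nothing more than counting rooms per capability:

    # Hypothetical room survey: which capabilities each room currently meets.
    rooms = {
        "Humanities 1101": {"projection", "student_response"},
        "Chemistry 2301":  {"projection"},
        "Library B10":     {"projection", "lecture_capture"},
        "Education 150":   set(),
    }

    capabilities = ["projection", "student_response", "lecture_capture"]

    for cap in capabilities:
        have = sum(1 for room_caps in rooms.values() if cap in room_caps)
        pct = 100 * have / len(rooms)
        print(f"{cap}: {have}/{len(rooms)} rooms ({pct:.0f}%)")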

2. Capabilities to Services

The next part of this is to map our services to the strategic capabilities.  Some services support multiple capabilities (Hosting Services, Identity Management Services for example).  Some capabilities may not have a supporting enterprise service.  A capability that does not have a set of supporting services might indicate a gap in the enterprise.  For example, there may not be a matching Lecture Capture Service that provides the Lecture Capture capability.  This might be done in an ad hoc fashion or it might be missing completely.  This gap in the enterprise service would be worth evaluating to see if the capability is being delivered effectively in the current structure.  If not, then we might want to look at developing an enterprise-wide Lecture Capture Service that supports all of the classrooms.
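One crude way to surface those gaps (a sketch with invented names, in Python) is to keep the capability-to-service mapping as data and look for capabilities with nothing behind them:

    # Hypothetical mapping of strategic capabilities to the enterprise
    # services that deliver them; an empty list marks a potential gap.
    capability_services = {
        "multimedia_projection": ["Classroom AV Service"],
        "student_response":      ["Classroom AV Service", "Learning Tools Service"],
        "lecture_capture":       [],  # delivered ad hoc today, no enterprise service
    }

    gaps = [cap for cap, services in capability_services.items() if not services]
    print("Capabilities without a supporting enterprise service:", gaps)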

3.  Services to Technical Roadmaps

This is where we use the brick diagram in our planning.  The brick diagram captures the technologies that support a given service.  The brick captures what is current state (those technologies currently in use), what is tactical (what will be used for the next 0-2 years), what is strategic (planned for use 2-5 years out), what is in containment (no new development), what is in retirement (being stopped) and what is emerging (interesting trends that may move into the tactical or strategic realms in the future).

These brick diagrams are created and maintained by the service owner – that is, the group that manages the service being provided.  The bricks let the service owners and the service teams grab a snapshot of their current state and their strategic plan for the next few years – what they will leverage, what they will stop, what they are watching and what they want to move to – in a simple format.
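A brick is simple enough to live as structured data. A sketch for a single service (invented technologies, in Python) might look like this:

    # Hypothetical brick for one service, keyed by the six brick categories.
    email_service_brick = {
        "service":     "Campus E-mail",
        "current":     ["Cyrus IMAP"],                      # in use today
        "tactical":    ["Cyrus IMAP"],                      # 0-2 years
        "strategic":   ["Hosted e-mail suite"],             # 2-5 years
        "containment": ["Departmental sendmail servers"],   # no new development
        "retirement":  ["Legacy POP-only server"],          # being stopped
        "emerging":    ["Cloud collaboration platforms"],   # watching
    }

    # A quick report the service team could review each planning cycle.
    for category in ("current", "tactical", "strategic",
                     "containment", "retirement", "emerging"):
        print(f"{category:12} {', '.join(email_service_brick[category]) or '-'}")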

 

Core Planning Stack from Tech to Strategy


This set of relationships is managed by a set of governance processes that define and prioritize the layer below.

At the lowest level, the service manager or service team usually defines and prioritizes the technology they use to deliver that service.   This is the layer that is captured in a brick diagram.  They should also describe the capabilities that are delivered by their service and which strategic directions they support.  

At the top level, senior leadership should work to refine the strategic directions as measurable capabilities that they want to see delivered.

The mid-level governance is a gap in our institution.  It is probably filled by project prioritization processes and budget processes.  I’ll talk about that in part 2 of this post.

Advanced CAMP – Part 2

Dave Gimpl:  Computing as a Service

Infrastructure for vaporware.  They are working on the infrastructure that enables cloud-computing.

Challenges in the data center:  rising costs of operations, the explosion of data, the difficulty of deploying new applications and services, and the difficulty of managing complex virtual machine systems.  When you map the business processes, they map to a variety of systems on the data center floor.

Blue Cloud is IBM’s entry in Cloud Computing.  Cloud Computing is holistic systems management.  Similar to Grid or Cluster computing.  A combination of “pervasive virtualization” for both servers and storage.  Allows for virtualization across varied hardware (I think).  On-demand and autonomic management and Utility Computing (Amazon’s service offering).

They gather up like systems (not necessarily identical) and manage them as a pool.  The focus changes from managing the individual SAN or server: you let the “ensemble” manage itself and you manage the Virtual Image.

When the image moves to another system, does it move with state?

North Carolina State’s implementation is open source.  All of the standards are open source.  The ensembles are wrapped with SOAP/SOA interfaces.  At North Carolina State’s Virtual Compute Lab, a student can request an XP machine to do their project.  They get the machine in increments of 30 minutes.  They are providing service for other institutions in their area.

Ken Klingenstein mentions a paper “The Computational Data Center: The Science Cloud”

Mark Morgan:  Genesis II – Accessible, Standards Based Grid Computing

http://www.cs.virginia.edu/~vcgr

The problems:  we have target grid users who are unable or unwilling to learn new programming tools & paradigms.  Users want the benefit of the grid without having to know about the grid.

Anything you can put a service in front of and put on the internet is part of the grid.  Telescopes, microscopes, computing power, storage, data, sensors.

We want to share all of this, but the sharing happens in a mutually distrustful domain.

Genesis II implements the standards that come out of the OGF (Open Grid Forum) to test them and vet them.  The Open Grid Services Architecture is part of the OGF.

Grids have been around for a long time, but they are barely being used.  People who design grids want cool features; users don’t care.  Genesis II is focused on the user and on making grids usable.

The Specs:

  • Resource Naming Service (RNS) – maps human-readable names to web service endpoints.  Supports Add, Remove, List.
  • ByteIO – allows you to treat grid resources like POSIX-like file resources.
  • Basic Execution Service (BES) – interface for starting, managing and stopping computing jobs.
  • WS-Naming – Endpoint Identifiers, Endpoint Resolution

You interact with the grid system in “file-like” ways.  Double click on a database query, drag a job onto a server resource, etc.

They use an FTP interface to manage resources on the grid.  On the Linux side, OGRSH acts as an intermediary between bash and the grid.  Users can do “ls”, “cat”, “cp” and OGRSH will redirect requests into the grid as appropriate.
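To make the “file-like” idea concrete – this is a loose illustration of the concept, not Genesis II’s actual API – a client library could wrap a ByteIO-style resource so that reading it feels like reading a local file:

    # Illustrative only: a wrapper that makes a remote grid resource look like
    # a read-only file object. The endpoint URL and payload format are made up.
    import io
    import urllib.request

    class GridFile(io.RawIOBase):
        """Minimal read-only, file-like view of a remote resource."""

        def __init__(self, endpoint):
            # A real grid client would talk to a ByteIO service endpoint here.
            self._data = urllib.request.urlopen(endpoint).read()
            self._pos = 0

        def read(self, size=-1):
            if size < 0:
                size = len(self._data) - self._pos
            chunk = self._data[self._pos:self._pos + size]
            self._pos += len(chunk)
            return chunk

    # Usage (hypothetical URL): code written against file objects doesn't need
    # to know the bytes came from a grid.
    # f = GridFile("https://grid.example.edu/resources/experiment-42/results")
    # header = f.read(64)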

Nigel Watling: Cloud Computing and the Internet Service Bus

http://biztalk.net

Building out a new data center in Chicago.  Microsoft is deploying 10,000 servers a month to support cloud computing.  Amazon expects their services operation to surpass the retail business soon.

Issues that come up:

  • How do I expose a service broadly?
  • How do I handle identity and access control?
  • How do I interoperate?  Between vendors?  Between standards?

Connect their composite application through an ESB to the internal applications and then out to the cloud for distributed resources.

Roland Hedberg:  OM2

http://www.openmetadir.org

OM2 is about representing events and moving information about events from one place to another.  A publish-subscribe messaging system originally designed around IdM.  Implementations in Python, Java and Perl.

Three ontologies:  message, operation and object ontologies.  The message ontology is the header, like for mail.  The operation ontology describes the actions (Miro ontology), which includes if-then-else as well as the usual add, modify, etc.  The object ontology describes the objects themselves.

Messages are based on RDF/XML.  Includes support for the Dynamic Delegation Discovery System (DDDS, RFCs 3401–3).
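As a rough sketch of what such an event message could look like (generic vocabulary invented for illustration, not OM2’s actual ontologies), built with Python and rdflib and serialized to RDF/XML:

    # Illustrative event message as RDF; the namespace and property names
    # are made up for this sketch.
    from rdflib import Graph, Literal, Namespace, URIRef

    EVT = Namespace("http://example.org/event#")

    g = Graph()
    g.bind("evt", EVT)

    msg = URIRef("http://example.org/events/12345")
    g.add((msg, EVT.operation, Literal("add")))        # operation ontology
    g.add((msg, EVT.objectType, Literal("person")))    # object ontology
    g.add((msg, EVT.target, URIRef("http://example.org/people/jdoe")))
    g.add((msg, EVT.source, Literal("registrar-system")))  # header-style info

    print(g.serialize(format="xml"))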

“Ontology Driven Application Development.”

Example applications:

Eduroam (http://www.eduroam.org) : allows you to travel between universities throughout Europe and use your local credentials to authenticate to the wireless network.

Bologna Process: supporting the movement of students between universities.  Any student should be able to go to another university and take a class, then come back.  Has admissions control and grade reporting.

What OM2 does:  transport the information to the correct address at all times by the use of DDDS, using the transport protocol of the receiver’s choice.

Brian Busby:  ESB at UW-Madison

Talk about our use of the ESB and experience with SOA.

UW-System has been looking at SOA for years (4 or 5 years).  We got to where we were going to buy a commercial SOA suite but we passed on the purchase.  SOA went into hibernation.  Then two projects came along:

  • Course Roster Information Service
  • Course Guide

We made the decision to take advantage of an existing license for the Cape Clear ESB.

Interesting impact:  people suddenly had to change their discussion to be about the services they need, not big data loads or APIs – and they made the change.

Issues:

  • Right-sizing the environment – we don’t know how many people are going to be using the ESB or the load on the services.
  • ESB as a service hosting facility
  • Collaborative development teams (Integration Competency Centers)
  • What aspects of integration should the ESB handle – do you put all the business logic in the ESB, etc.?
  • Support of the loosely coupled environment

Organization Issues:

  • Governance
  • Ownership of the services, orchestration, operational data stores
  • Security policies
  • Web services granularity
  • Data representation – what XML should we use to represent data
  • Service Level Agreements
  • Service definition & re-use

Getting the ESB in place is finally driving forward the conversations that we were having years ago.


Advanced CAMP – Registering, Discovering and Using Distributed Services Part 1

R.L. Bob doing the introduction: 

Advanced CAMP could mean to some people the advanced topics beyond just the basics.  Bob likes to think of it as the Advance Camp out in the wilderness where you are more likely to get caught in a blizzard, get shot and generally face the wilderness.

The theme that came out was the needs around service discovery in higher education.  Discussions will cover CyberInfrastructure for Humanities, Cloud/Grid, SOA, ESB.    Discussion groups on data models, governance, service discovery and <your topic here>.

Workshop Format:  Each participant should offer (at least):  1 opinion, 1 rant, 1 hope, 1 keen observation.

The problem space:  SOA is happening across academia in a variety of ways, from Web 2.0 apps and mash-ups to messaging.  It happens both intra- and inter-institutionally.  This impacts how we offer a variety of services and raises a set of questions:

  • How should digital tools and data for scholarship be made available?
  • What metadata should be recorded about them?
  • How can metadata be globally aggregated and searched?
  • What operational and security environments should protect them and enable their appropriate use?
  • How should their semantic relationships be codified and maintained?

Mark comments:  connecting metadata to the object and having it persist and stay attached as the object moves around and is copied is a difficult area to address.

Jill:  SOA is also talked about for traditional administrative systems, but do people think about this?

Why would academics want to store their content in a central system?  It might be about the ability to add metadata and re-use the content in multiple places.

Loretta Auvil:  SEASR

http://seasr.org/

The goal was to develop a software environment that would allow for the reuse of software components focused on data mining applications for the humanities.  Looking at text analysis and music analysis, doing genre analysis and mood analysis.

The components and the descriptions of those components are very web-centric, based on SOA and the Semantic Web.  They are talking about a semantically enabled SOA.  The component descriptions are written in RDF.

Looking at interesting ways of searching:  Tag Clouds, Link Flows

Working on a workbench using Google Web Toolkit.  Allows you to do a mash-up of the components into flows.

Example Applications:  MONK – it has a custom UI that calls SEASR as a service.  NEMA – a music analysis service that takes 10-second slices of an MP3 and looks at the genre and mood.

Steve Masover:  Project Bamboo

http://projectbamboo.uchicago.edu/
Flickr and del.icio.us tags:  projectbamboo

Asking the question:  How can we advance arts and humanities research through the development of shared technology services?

Areas of focus

Discovery and Analysis
Annotate and manage – including the idea of Folksonomic tagging with identifiable levels of authority.

Need to support serendipitous discovery.  Search is not useful if it limits serendipity and foraging.  Intellectual property pain and accelerating interdisciplinarity motivate “commons-based peer production” (cf. Yochai Benkler).  There is impatience with copyright.  There is a desire to support inter-scholar relationships, and community/networking that supports a “lattice of interest”.  Legal and institutional policy are trending towards advocacy around fair use in law.

Emerging aspects of scholarly practice include: shared standards and services, social and scholarly networks, deep consortia across disciplines and national borders.  There is need for a chain-of-credibility in mash-ups.

Looking less at service/tool development and more at standards-profiling and services to facilitate interoperability.  One area they might focus on is the sharing/tracking of reference use:  who used a resource in what context and for what purpose, and who provided the resource to the commons.

We are moving from a wedding-cake stack (data and repository, middleware, application on top) to a three-sided figure with mash-ups and tools on the edges of the triangle.

Ken K – we heard from an English scholar that he does not do "team English.  He is a cat and he does not want to be herded".

There is a tension between scholars wanting to know "who is using their stuff" and not wanting their activities monitored.

Daniel Davis:  Fedora Commons

http://www.fedora-commons.org/

Now a 501(c)(3) organization.  Moving from an internal grant-funded project to a community project.

Much of the work is focused on integrating services from other projects rather than re-writing code that already exists.

Splitting into multiple projects: 

  • Fedora Repository – original Fedora Project,
  • Middleware – looking at seamless integration between other groups’ services,
  • Akubra Storage – new storage plug-in architecture, transaction file system,
  • Topaz – core components for semantic-enabled apps currently publishing several journals mostly in medical research,
  • Mulgara Triplestore – highly scalable triplestore.

Relevant technical trends:  SOA, Web2.0, RDF, OWL and OWL-S

There are two paradigms that we are dealing with:  the lightweight Web model with little trust/security and the Enterprise model where you have deep trust/security models (think HR systems).  A repository can bridge these two worlds.  You can easily deposit content, then add a trust model and policy-driven controls for adding scholarly information on top of the content.

The Enterprise paradigm needs to support near-ACID (atomicity, consistency, isolation, and durability) semantics and a strong security and trust model.

Question:  The idea that there is a difference between Federated Identity and Federated Repositories, and how that would work.  They are different aspects but related.  There are discussions about sharing information between the repositories, like user accounts.  In one repository, that person might be an account.  In the other, they might be a reference.  How much do you share between the two repositories?

Jens Haeusser:  Kuali Student

http://www.kuali.org/communities/ks/index.shtml

Keys:  Modular, standards-based student system.  Community sourced rather than open source, in that there is a board that sets direction and manages the roadmap.  It is a person-centric system – focused on meeting the needs of the users of the system.  SOA-based.

Traditional ERPs – you tend to implement twice.  Once when you try to make the system meet your current practices, and then again when you accept the best practices as defined by the vendor.

Functional Vision:  Support the end users by anticipating their needs.  Support a wide range of learners and learning activities (traditional students but also life-long learners, distance learners, exchange students et al).  Design to make it easier to change business processes.  Reduce time staff spend on routine tasks.

Technical Vision:  SOA and Web Services.  Not delivering an application as much as they are delivering a framework for you to deploy your business processes.  Using the Web Services stack: standards-based, released under the Educational Community License (ECL).  Building the system in Java.  Open source reference implementation.

Guiding Principles for the KS Technical Architecture as a PDF

The functional design team is gathering input from a broad range of players from both within an institution as well as between institutions.

The first thing they are working on is Learning Unit Management.  They are treating learning units more like SKUs: you can compose them together to make larger units.  They have learned that the current way many systems define courses isn’t very good.

Technical Recommendations as a PDF

Database:  Apache Derby
Orchestration:  Apache ServiceMix, Sun OpenESB, Kuali Enterprise Workflow (KEW)

Created a standard development environment that includes a submission environment.  Maven and Subversion, Google Web Toolkit (UI).  A Business Rule Management System (BRMS) to store and search for business rules, including a UI for business users to define the rules.  Looking at the Fluid Project for support of accessibility/usability requirements.

They are using different ESBs for different aspects of the framework.


Measuring the value of projects

Jason Uppal of Quickresponse gave a talk on Building Enterprise Architects at the Open Group’s Enterprise Architecture Practitioners Summit. He mentioned that Toyota judges project success based on three corporate objectives:

  • Profit from the Program
  • Market Share
  • Learning

These facets got me thinking about our post-project reviews. We tend to measure our projects on whether or not they were done on-time and under-budget. We have post-project reviews that ask, "how could we run projects better in the future?" but they are focused on the project process. We don’t really evaluate the project on a set of facets. So we evaluate "What" and "How" but not "Why".

As I think about this, I think the interesting facets for us would be:

  • Did this reduce costs over the long run – e.g., does it have a reasonable ROI? (A toy calculation of this facet is sketched after the list.)
  • Did this “improve” the enterprise architecture – did it reduce redundancy, reduce complexity, and advance strategic initiatives?
  • What did we learn about the enterprise in the process?
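The ROI facet, at least, is easy to make concrete. A toy calculation (made-up numbers, in Python) might look like:

    # Hypothetical numbers: did the project pay for itself over its useful life?
    project_cost = 250_000       # one-time implementation cost
    annual_savings = 60_000      # support and licensing costs avoided per year
    useful_life_years = 5

    total_savings = annual_savings * useful_life_years
    roi = (total_savings - project_cost) / project_cost
    print(f"ROI over {useful_life_years} years: {roi:.0%}")  # 20%

The other two facets are harder to put a number on, which is probably part of why we default to on-time and under-budget.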
