Plum and Raspberry Galette with Lemon Ricotta Filling

Plum Raspberry Galette

I love making galettes – a French free-form tart.  This is a peak-of-Summer galette made with local plums and raspberries.  Once you get galette making down, you can create galettes with lots of different fillings: apples and toasted pecans, peaches with almonds, blueberries with lemon.  This takes a few hours to make, but much of that time is waiting for the galette dough to chill or for the galette to bake.

Preheat the oven to 385°F.

Ingredients:

1 pound Plums (firm ripe)
1 C fresh raspberries
8 oz. Ricotta
Zest of 1 Lemon
1 Egg at room temperature
1 Tbl cornstarch
2 Tbls (plus more for dusting) Powdered (Confectioners) Sugar
1 Tbl melted butter
3 Tbl Turbinado Sugar

1 disk Galette dough (see the recipe below).

Put the ricotta into a fine strainer or inside a piece of cheesecloth and let it drain for 30 minutes or more.  Mix the lemon zest and 1 Tbl of confectioners sugar into the ricotta.  Taste the ricotta mixture.  It should be balanced between sweet and salty and taste lemony.  If it needs more sugar, add a bit more to bring the sweetness up.  Once it tastes like you want it to, mix the egg in well.

Lemon zest / Ricotta draining

Slice the plums into 1/4-inch-wide slices.  To pit a plum, slice all the way around the outside of the plum from top to bottom, following the seam where the two halves of the plum grow together.  Slide your knife into the plum at the top, then turn the plum, cutting it in half all the way to the pit.  Then twist the two halves apart, back and forth, gently until one side breaks free from the pit.  Slice this half into 1/4-inch-wide slices.  Cut the other half of the plum in half again.  Do the twist trick once more until one quarter of the plum breaks free of the pit.  Cut the remaining quarter off of the pit or pull the pit out with your fingers.

Add the slices to a large bowl.  Add 1 Tbl cornstarch and the remaining 1 Tbl of confectioners sugar.  Add a pinch of salt and mix.  Taste your plums; if they are tart, you might want to add more confectioners sugar.

Plums

Roll out the galette dough into a 16-inch-wide disk and trim to a circle.  Slide the dough onto a piece of parchment paper and then slide the parchment and dough onto a baking sheet.

Drop small pieces of the ricotta mixture into the center of the dough, leaving a 2-inch margin around the outside edge of the dough.  Lay the plum slices on the ricotta.  If you're making this for a fancy party, you can arrange the slices in concentric circles.  If you want quick and simple, just pile it all inside.  Make sure to leave a 2-inch margin of dough so that you can fold it over to make the pleated edge.

Assembling Galette

Fold the edges of the galette dough over the sides of the fruit.  There are several ways to pleat a galette dough.  See this Fine Cooking article for details.

Brush the dough with the melted butter.  Sprinkle the turbinado sugar over the buttered dough and across the top of the galette.
Assembling

Bake the galette for 30 minutes, turning once after about 20 minutes.  Sprinkle the raspberries across the top of the galette and bake for 15 more minutes.  Pull the galette from the oven when the dough is nicely browned.  Slide the parchment paper and galette off onto a cooling rack and let cool for 10 minutes or more.  Dust the top of the galette with confectioners sugar and serve.  You can serve this with vanilla ice cream or creme fraiche (whisk lemon juice and confectioners sugar into the creme fraiche).

Finished Galette

Galette Dough Recipe

5 3/4 oz. (1 1/4 cups) All-purpose Flour
1 Tbs. Sugar
1/4 tsp. Salt
4 oz. (8 Tbs.) well chilled unsalted butter cut into small cubes
1/3 cup Ice water

Add the dry ingredients to a food processor and pulse several times to mix.  Add the butter and pulse a few times.  Do not over-mix the butter; there should still be pea-sized chunks of butter in the dough.  Do not mix until it looks like corn meal.  Add the water all at once and pulse a few times until the dough starts to come together.  It will not come together into a ball.  It will still be crumbly and will seem under-mixed, but don't worry; it will come together in the fridge.  Pour the dough onto a sheet of plastic wrap.  Gather it up and form it into a disk.  Wrap tightly and put in the fridge for two hours.


Tired, Grumpy, Fuzzy and Twitchy

Lake Mendota, bikes and boats

I’m sure that sport psychologists / physiologists have a name and maybe a reason for these feelings…
I’m getting ready to ride the Door County Century this weekend. This means that I have spent the last couple of months riding longer rides and building up time in the saddle. I was up to about 190 to 200 miles a week two weeks ago. I was also working out with a personal trainer twice a week. In short, I was getting a lot of exercise – 15 plus hours a week.
As this weekend and the upcoming century ride approached, I started to taper off my workouts. I dropped my twice-a-week personal trainer down to once a week last week and none-a-week this week. I've backed off the miles that I bike each week.
I've noticed that, as I taper back on my workouts, I get twitchy and anxious, but it is mixed with fuzziness and sleepiness. I'm also kinda grumpy, which (I think) is unusual for me. It is an unwholesome combination of lack of mental focus mixed with an over-caffeinated kind of buzz and a lethargic desire to nap for hours on end. I'm a bit concerned about the end of the biking season, which is coming up soon due to lack of light, too much cold and then snow. I'll need to ski a lot this Winter and find another indoor endurance exercise (swimming?) for those long Winter months.
On the other hand, the rest has felt good. My shoulders, neck and hamstrings were starting to complain about all the work they were doing. But then again, all this exercise has meant that I could eat well and still drop weight.

Garmin EDGE 705 – Bugs, Bells and Whistles

I got a new Garmin EDGE 705 bike computer about 6 weeks ago. I've been riding 3 or 4 times a week with the Garmin and have synced it to several applications and a web site. The Garmin EDGE 705 has great bells and whistles, but the basic function, turn-by-turn directions, is buggy and unreliable.

What I bought: I bought the Garmin Edge 705 with the Heart Rate sensor, Speed/Cadence sensor & Data Card with Street Maps (SKU 010-00555-40). It came with version 2.2.0 of the firmware. I have also tried versions 2.3.0 and 2.4.0.

What I like:

Installation: I love the fact that there is a single sensor that picks up both speed and cadence. The sensor is also sensitive, so you don't have to set it extremely close to the pedal or wheel for the device to work. The Garmin EDGE 705 discovers the peripherals automatically and flawlessly (at least for me; others on the forum have talked about cadence problems).

Set Up: There are a lot of menus to cycle through to set up the device. This is a mixed vote from me. I like the ability to set up how each screen looks (how many data fields are shown, what information is displayed in each data field, etc.). I have had to dig to find settings, and I know that someplace I set the minimum speed for autopause, but I have yet to figure out where so I can change it.

Post Ride Data Analysis: This is where the bells and whistles ring out. The device syncs brilliantly and easily (for me, YMMV, see the Motion Based Forums) to the Garmin software on my Mac. It also syncs to the MotionBased web site (see the list of my rides in the sidebar on this site). I also bought Ascent from Montebello Software. The default Garmin software provides basic analysis of your ride data. MotionBased and Ascent provide detailed analysis some of which is pretty cool.

What I don’t like:

Turn-By-Turn Navigation: Supposedly, you can load a GPS Track File (in GPX format) into the Garmin. You then tell the Garmin that you want to follow that track. The Garmin will navigate you around the route. Supposedly. I have tried to get this to work a half dozen times. I have created GPX Track files in GMap-Pedometer, Google Maps and MapMyRide.com. I have tried making sure that the start and end points aren’t near each other.

This has never worked correctly. I’ve had the device start to tell me to make u-turns in the middle of my ride. I’ve had the unit tell me to make a turn 5 miles early, then shut off. I’ve had the unit say that I should cut through a barn and corn field though I preferred to stay on the road.
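
For context, the track files those sites export are plain GPX, which is just XML with a list of track points. Here is a hedged sketch of sanity-checking one in Python before loading it onto the device; the file name is hypothetical, and this is my own illustration, not part of any Garmin workflow:

```python
# Hedged sketch: peek inside a GPX 1.1 track file before loading it on the unit.
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"  # GPX 1.1 XML namespace

def track_points(path):
    """Yield (lat, lon) pairs from every <trkpt> element in a GPX file."""
    root = ET.parse(path).getroot()
    for pt in root.iter(GPX_NS + "trkpt"):
        yield float(pt.attrib["lat"]), float(pt.attrib["lon"])

points = list(track_points("door_county_century.gpx"))  # hypothetical file name
print(f"{len(points)} track points; start={points[0]}, end={points[-1]}")
```

Checking that the first and last points are not on top of each other is one quick way to verify the "start and end points aren't near each other" condition mentioned above.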

I do have hopes that Garmin will patch the software so turn-by-turn works. Garmin does seem to be responsive to their users and they do seem to issue patches regularly.

Managing the Buttons: You need to push and hold the power button to turn on the Garmin. You need to push the timer start at the beginning of the ride. You must push timer stop at the end of the ride or the Garmin will keep recording even though your wheel isn't turning. The Garmin added the drive back from one ride to my ride total; I could hear it chirping away as I drove home. Compare that to my simple CatEye computer that just starts and stops on its own, or my Polar that needed a push to start but could figure out the ride was over all by itself. It feels like I need to pay more attention to managing my cycle computer than I really want to.

Software Updates, Syncing et al: All of this works flawlessly (so far) but it is another device that gets software updates and that you need to sync to your computer. It is fine but just another digital device to fuss with.

Conclusion:

The set-up is easy. The unit will automatically calibrate for wheel size and speed. The post-ride data analysis is great. It makes it dead simple to keep a workout log. The turn-by-turn doesn't work, so I still ride with a paper map to navigate by. I would love to be able to rely on this device for navigation when I'm riding. It is fussier than other computers that I have used, but the post-ride data analysis is a beautiful thing.

Jim’s Fire and Wine Scallops Recipe

Scallops with Green and White Bean salad

The sauce and spice mix add a little heat and sweetness to the already sweet scallops. This recipe takes about 15 minutes to prep and 10 minutes to cook. I served these with a cold Green and White bean salad and crostini and an Italian white wine. This serves two as a main course or four as an appetizer.

Ingredients:

  • 1/4 C minced sweet onion (Walla Walla or similar)
  • 1 Tsp. Fresh Thyme leaves or chopped Thyme tips
  • 1 clove of garlic peeled
  • 1/2 C. White Wine – medium dry
  • 12 Oz of large dry scallops or the closest even number (8 in this dish)
  • 1 Tbl each Olive Oil and Butter
  • Penzey’s Northwoods Fire spice mix or the mix below
  • Salt and Pepper to taste

Instructions:

Mince the onion and either pick the thyme leaves or chop the thyme tips. Lightly dust the scallops with the Penzey’s Northwoods Fire mix or lightly dust with chili powder, smoked paprika, dried thyme, salt and pepper. Heat a 12″ non-stick pan over medium high heat. Add olive oil and butter and heat until butter stops foaming and turns lightly brown.

Onion and Thyme
Dusted Scallops / Olive Oil and Butter

Add the scallops and the whole garlic clove to the pan but do not crowd the pan. Let the scallops sear on one side for 3 minutes. Turn the scallops and add the thyme and onion. Stir the thyme and onion into the oil. Let the scallops sear for 2 to 3 more minutes then remove to a warm plate.
Add Scallops and Garlic / Turn and Add Onions

Add the white wine and turn the heat up to high. Scrape all the brown bits off the pan and stir while you reduce the wine by half.

Add wine.

Plate the scallops. Taste the sauce and adjust the seasoning with salt and pepper if needed. Pour the sauce over the scallops.

Frazz Hair

I get Frazz Hair when I bike. I view Frazz Hair as a measure of the quality of the ride. Good Frazz Hair means I had a good ride. This is an example of good Frazz Hair:
Frazz Hair

This hair came from a beautiful ride to Paoli, WI after a Summer rain storm. The roads were dry but it smelled like a Summer rain and wheat fields and the herbaceous scent of prairie flowers in bloom.

We were riding fast – that helps pull hair up into the vents on my helmet and make it all spiky. We were working hard, so there was plenty of heat and moisture to steam-set those spikes. It was a long enough ride to give the Frazz Hair plenty of time to form and build to the beautiful example you see above.

It’s a good day that ends with really good Frazz Hair.

Advanced CAMP – Part 3

Merri Beth Lavagnino – Privacy and Policy

Policy and privacy are really a consideration of the human aspects and impacts of technology.  Policies range from strategic direction and operating philosophy (which are usually informal and cultural) to public and institutional policies (which are documented and usually legal documents).

Institutional policy – a statement that reflects the philosophies and values of the project, service, organization or federation.  Policies should be clear and concise, applicable across a wide range of activities, and should not change very much.

Why create a policy?

  • When reasonable people disagree
  • To guide thinking when making decisions
  • To correct repeated misbehavior
  • When there are significant risks or liabilities
  • In response to external forces like regulation or law

Where does the policy apply?  Federation, Institution, Service

Real-life stories:

  • Email Outsourcing:  vendors proposed that we would do incident response and legal requests for both students and alumni.  There was no policy that said they had to be in charge and in control.  She took the discussion back to the original goals for the project: (1) improve and add services for students and (2) reduce their costs.  So they did not take on the incident response, because that would not reduce the costs.  That was the policy that helped inform the decision.
  • Course Management System:  they changed their course management model.  They began to get incident reports because the new service didn’t match the old policies for the previous system.
  • Virtualization:  They moved to new virtualized systems.  The old policies were built around knowing that super-hot data is on a specific machine, with a specific system admin.  Now, they didn't know which machine had the data, and all sys admins might have access.  They had to expand training and the understanding of how they would manage super-hot data.
  • InCommon Agreement:  Thought that went very well.

“A policy is a temporary creed liable to be changed, but while it holds good it has got to be pursued with apostolic zeal.”  Mohandas K. Gandhi

Privacy:

Categories of privacy harms:

  • Intrusions : They come into your space and contact you and tell you what to do (spam, cold calls)
  • Information Collection:  They watch what you are doing more than they should (tracking, interrogation, etc)
  • Information Processing:  They have a lot of data about you, and they do things with it. (data mining)  Need to watch out for secondary use – collect for one reason then use it for another reason.
  • Information Dissemination:  They disclose data about you, perhaps more than you think they should.  (Transferring data, true or false facts)

Fair Information Practice Principles:  The FTC drafted these principles and enforces them.  Higher Ed is not under the FTC's jurisdiction, but users expect these principles to be met.  If we don't meet them, we fall short of what our users expect.

  • Notice/Awareness:  User should be given notice of your information practices, in order to make an informed choice about whether to provide information.
  • Choice/Consent:  User should be given options as to how any personal information collected from them may be used.
  • Access/Participation:  Users should be given access to the data held about them, and the ability to contest that data's accuracy and completeness.
  • Integrity/Security:  data should be secure and accurate
  • Enforcement/Redress:  there should be a mechanism in place to enforce fair information practices and it should include appropriate means of recourse by injured parties.  At a minimum, you should right the wrong.

Ken Klingenstein: Federated Identity and Data Protection Law

Good quote from Ken K:  "This is an attempt to bring trust to the internet via technology, not just because it is just us chickens."
EU Law Directive 95/46/EC:  You can process personal data when it is required to perform a contract, required to satisfy a legal duty, or done with consent.

Identity Providers must identify which services are necessary for education and research.  They must inform the users.  They may seek users' informed, freely given consent to release personal data to other services.  You have to show why it is important.  They should have a data processor/data controller agreement with all service providers to whom personally identifiable data is released, and must ensure adequate protection of any data released to services outside the EU.  We have to play by the EU rules.

Service Providers must consider whether personally identifiable information is necessary for their service or whether anonymous identifiers are sufficient.  You may request personal information from users, but you must inform them.

There is no normalized definition of what counts as Personally Identifiable Information (PII).  There are questions about email addresses: a third-party email address might not be PII, but a .edu address might be.  So the content might be more important than the field.

IP Addresses – if it is a dynamic address, it is not PII.  So, unless you know it is a dynamic address, you have to treat it as PII.

eduPersonTargetedID – this is going to the EU privacy commission this Fall.  It is a 32-bit opaque identifier that is different per site visited.
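
To make the idea of a per-site opaque identifier concrete, here is a hedged sketch of the general technique (a keyed hash over the user plus the service provider); it is not the actual eduPersonTargetedID algorithm, and the secret and entity IDs are made up:

```python
# Hedged illustration of a per-service opaque identifier: the same user gets a
# different but stable value at each service provider, so sites cannot
# correlate the user across services. Not the real eduPersonTargetedID spec.
import hashlib, hmac

IDP_SECRET = b"keep-this-private"  # hypothetical IdP-side secret

def targeted_id(username: str, service_provider: str) -> str:
    msg = f"{username}!{service_provider}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()

print(targeted_id("bob", "https://sp.example.edu/shibboleth"))
print(targeted_id("bob", "https://other.example.org/shibboleth"))  # different value
```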

OASIS Cross-Enterprise Security and Privacy Authorization (XSPA) – a just-formed group.  A mechanism to allow consent agreements to flow with the data.  The first and dominant use case is health care; they are looking for other use cases.  Does this make consent a new service in our loosely coupled services?  Do services need to be consent-aware?

Report Out from Discussion Sessions:

Data Modeling Group:

Modeling person and organization data.  Modeling of organization data is remarkably difficult, not just in the nature of the data but also in the resistance that you get from organizations to being characterized.  There are multiple organization charts – financial, HR and reporting structures.  The characterizations can be political.  Are there pressures that will lead to the marginalization of the old way of doing things?  Organizations that don't want to be characterized may not get services.

Service Discovery:

What would a service description look like: what it is called, cost, how to call it, operational context (where it is physically located).  There was discussion about how you describe the service and how you recognize similar services in distributed locations.  Talked about how the grid is doing this with their RNS.
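
A hedged sketch of what such a service description record might hold; the field names and URLs are illustrative, not drawn from any registry standard:

```python
# Hedged sketch of a service description record; field names and URLs are
# invented for illustration, not taken from any particular registry standard.
service_description = {
    "name": "Course Roster Information Service",     # what it is called
    "endpoint": "https://esb.example.edu/roster",     # how to call it (hypothetical)
    "interface": "https://esb.example.edu/roster?wsdl",
    "cost": "no charge to campus units",
    "operational_context": {
        "hosted_at": "campus data center",            # where it physically runs
        "support_contact": "middleware@example.edu",
    },
}
```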

What is happening today: people using Google to search for services and looking for a WSDL.

How do you get consent?  What about promises and claims?  What about a directory of all the services?  What about a directory of directories?  You could have a convention for naming the directory so you could at least find the directories.

DNS works for finding things.

Governance:

Domain Governance – governance currently revolves around an application, a data element, or an attribute (student ID).  These models will have to evolve to domain governance: enrollment, IdM, etc.

Who owns the data especially as the data is transformed and sent along the ESB?  Services are requesting the data that can then be used by other services.

SLAs – keeping track of who can use the service.

There is a need for a directory of services, especially for emergency notification.  There is also a need to know who is consuming services so you can notify them of changes.

What is being done now on campuses?  It is evolving.  Identity and Access Management is being governed as a domain at Penn State.

Saint Louis University has good examples of domains in higher education that need to be governed as a domain.

Lightning Talks:

Rob Carter:  Tracking and Authenticating IP in Cyberspace

We used to have all of our resources stored inside the walls of the institution.  Now, with cloud computing and Web 2.0 applications, our intellectual property is out in the cloud.  How do we track its reuse?  How do we contextualize the content?

How do we know that it is really an artifact of mine and not someone spoofing my creations?

We could solve this with digital signatures.  What if we could add metadata before it goes out into the cloud?  Get a signature of the object and attach the signature to the object or store it elsewhere.
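
A minimal sketch of the idea (my own illustration, not Rob's implementation): fingerprint the artifact and attach that fingerprint as metadata before it goes out. A real deployment would use a public-key signature so others could verify authorship; a bare SHA-256 digest is used here only to keep the example self-contained, and the file name, creator and license are hypothetical.

```python
# Hedged sketch: fingerprint an artifact and attach the fingerprint as metadata
# before it leaves for the cloud. A real system would sign with a private key;
# a plain digest is shown only to keep the example dependency-free.
import hashlib, json

def fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

artifact = "lecture03.pdf"                      # hypothetical artifact
metadata = {
    "creator": "rob@example.edu",               # hypothetical identity
    "sha256": fingerprint(artifact),
    "license": "CC BY 3.0",
}
print(json.dumps(metadata, indent=2))           # attach to the object or store elsewhere
```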

How does this align with Creative Commons licensing efforts?  You can search and crawl for CC-licensed objects that you use.

Loretta Auvil:  Music Analysis.

Dynamic analysis of a Tom Lehrer file.    Very entertaining.

Scotty Logan:  IAM Services and Well Behaved Apps

If every app does its own thing, there is no real management.

Trust the container for identity (you can get a username from Tomcat et al.), authentication, and authorization.

Have the container provide the groups and privileges as a URI.
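
A rough sketch of what "trust the container" can look like in code (my illustration, not from the talk): a WSGI app that simply reads what the container or front-end has already established. The X-Groups header name and the group URI are hypothetical.

```python
# Hedged sketch of "trust the container": the application reads identity and
# group information the container has already established instead of doing its
# own authentication. The header name and group URI are hypothetical.
def application(environ, start_response):
    user = environ.get("REMOTE_USER", "anonymous")      # set by the container
    groups = environ.get("HTTP_X_GROUPS", "")           # e.g. group URIs, ';'-separated
    is_admin = "https://groups.example.edu/app-admins" in groups.split(";")

    body = f"user={user} admin={is_admin}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```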

OAuth.net – a specification developed by a group to solve the "I want my Flickr protected photos on Facebook but I don't want to give you my Flickr username and password" problem.


Advanced CAMP – Part 2

Dave Gimpl:  Computing as a Service

Infrastructure for vaporware.  They are working on the infrastructure that enables cloud computing.

Challenges in the data center:  the rising costs of operations, the explosion of data, the difficulty of deploying new applications and services, and the difficulty of managing complex virtual machine systems.  When you map the business processes, they map to a variety of systems on the data center floor.

Blue Cloud is IBM's entry in Cloud Computing.  Cloud Computing is holistic systems management, similar to Grid or Cluster computing: a combination of "pervasive virtualization" for both server and storage.  It allows for virtualization across varied hardware (I think), plus on-demand and autonomic management and Utility Computing (Amazon's service offering).

They gather up like systems (not necessarily identical) and manage them as a pool.  The focus changes from managing the SAN or server: you let the "ensemble" manage itself and you manage the Virtual Image.

When the image moves to another system, does it move with state?

North Carolina State's implementation is open source.  All of the standards are open source.  The ensembles are wrapped with SOAP/SOA interfaces.  At North Carolina State's Virtual Compute Lab, a student can request an XP machine to do their project.  They get the machine in increments of 30 minutes.  They are providing the service for other institutions in their area.

Ken Klingenstein mentions a paper “The Computational Data Center: The Science Cloud”

Mark Morgan:  Genesis II – Accessible, Standards Based Grid Computing

http://www.cs.virginia.edu/~vcgr

The problems:  we have target grid users who are unable or unwilling to learn new programming tools & paradigms.  Users want the benefit of the grid without having to know about the grid.

Anything you can put a service in front of and put on the internet is part of the grid: telescopes, microscopes, computing power, storage, data, sensors.

We want to share this, but the sharing happens in a mutually distrustful domain.

Genesis II implements the standards that come out of the OGF (Open Grid Forum) to test and vet them.  The Open Grid Services Architecture is part of the OGF.

Grids have been around for a long time, but they are barely being used.  People who design grids want cool features; users don't care.  Genesis II is focused on the user and making grids usable.

The Specs:

  • Resource Naming Service (RNS) –  maps human-readable names to web service endpoints.  Supports Add, Remove, List.
  • ByteIO – allows you to treat grid resources like a POSIX-like file resource.
  • Basic Execution Service  (BES) – interface for starting, managing and stopping computing jobs.
  • WS-Naming – Endpoint Identifiers, Endpoint Resolution

You interact with the grid system in “file-like” ways.  Double click on a database query, drag a job onto a server resource, etc.

They use an FTP interface to manage resources on the grid.  On the Linux side, OGRSH acts as an intermediary between bash and the grid.  Users can do "ls", "cat", "cp" and OGRSH will redirect requests into the grid as appropriate.

Nigel Watling: Cloud Computing and the Internet Service Bus

http://biztalk.net

Microsoft is building out a new data center in Chicago and is deploying 10,000 servers a month to support cloud computing.  Amazon expects their services operation to surpass the retail business soon.

Issues that come up:

  • How do I expose a service broadly?
  • How do I handle identity and access control?
  • How do I interoperate?  Between vendors?  Between standards?

Connect their composite application through an ESB to the internal applications and then out to the cloud for distributed resources.

Roland Hedberg:  OM2

http://www.openmetadir.org

OM2 is about representing events and moving information about events from one place to another.  A publish-subscribe messaging system originally designed around IdM.  Implementations in Python, Java and PERL.

Three ontologies:  message, operation and object.  The message ontology is like a mail header.  The operation ontology describes the actions (the Miro ontology), which includes if-then-else as well as the usual add, modify, etc.  The object ontology describes the objects.

Messages are based on RDF/XML.  Includes support for the Dynamic Delegation Discovery System (DDDS, RFCs 3401-3403).
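
To show the flavor of an RDF-based event message (in the spirit of OM2's message/operation/object split), here is a hedged sketch using the rdflib Python library; the namespace and terms are invented for illustration and are not the real OM2 ontologies.

```python
# Hedged sketch of an event message as RDF/XML. The namespace and terms are
# invented for illustration; they are not the actual OM2 ontologies.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/om2-demo#")

g = Graph()
msg = URIRef("http://example.org/messages/42")
g.add((msg, RDF.type, EX.Message))
g.add((msg, EX.operation, EX.add))                               # operation part
g.add((msg, EX.object, URIRef("http://example.org/people/jdoe")))  # object part
g.add((msg, EX.attribute, Literal("mail=jdoe@example.org")))

print(g.serialize(format="xml"))                                 # RDF/XML on the wire
```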

“Ontology Driven Application Development.”

Example applications:

Eduroam (http://www.eduroam.org) : allows you to travel between universities throughout Europe and use your local credentials to authenticate to the wireless network.

Bologna Process: supporting the movement of students between universities.  Any student should be able to go to another university and take a class, then come back.  Has admissions control and grade reporting.

What OM2 does:  transport the information to the correct address at all times through the use of DDDS, over the transport protocol of the receiver's choice.

Brian Busby:  ESB at UW-Madison

Talk about our use of the ESB and experience with SOA.

UW-System has been looking at SOA for years (4 or 5 years).  We got to where we were going to buy a commercial SOA suite but we passed on the purchase.  SOA went into hibernation.  Then two projects came along:

  • Course Roster Information Service
  • Course Guide

We made a decision to take advantage of a license for the Cape Clear ESB.

Interesting impact:  people suddenly had to change their discussion to be about the services they need, not big data loads or APIs, and they made the change.

Issues:

  • Right-sizing the environment – we don’t know how many people are going to be using the ESB or the load on the services.
  • ESB as a service hosting facility
  • Collaborative development teams (Integration Competency Centers)
  • What aspects of integration should the ESB handle – do you put all the business logic in the ESB, etc.?
  • Support of the loosely coupled environment

Organization Issues:

  • Governance
  • Ownership of the services, orchestration, operational data stores
  • Security policies
  • Web services granularity
  • Data representation – what XML should we use to represent data
  • Service Level Agreements
  • Service definition & re-use

The fact that we got the ESB in place is finally driving forward the conversations that we were having years ago.


Advanced CAMP – Registering, Discovering and Using Distributed Services Part 1

R.L. Bob doing the introduction: 

Advanced CAMP could mean to some people the advanced topics beyond just the basics.  Bob likes to think of it as the Advance Camp out in the wilderness where you are more likely to get caught in a blizzard, get shot and generally face the wilderness.

The theme that came out was the needs around service discovery in higher education.  Discussions will cover CyberInfrastructure for Humanities, Cloud/Grid, SOA, ESB.    Discussion groups on data models, governance, service discovery and <your topic here>.

Workshop Format:  Each participant should offer (at least):  1 opinion, 1 rant, 1 hope, 1 keen observation.

The problem space:  SOA is happening across academia in a variety of ways, varying from Web 2.0 apps to mash-ups to messaging.  It happens intra- and inter-institutionally.  This impacts how we offer a variety of services and raises a set of questions:

  • How should digital tools and data for scholarship be made available?
  • What metadata should be recorded about them?
  • How can metadata be globally aggregated and searched?
  • What operational and security environments should protect them and enable their appropriate use?
  • How should their semantic relationships be codified and maintained?

Mark comments:  connecting metadata to the object and having it persist and stay attached as the object moves around and is copied is a difficult area to address.

Jill:  SOA is also talked about for traditional administrative systems, but do people think about this?

Why would academics want to store their content in a central system?  It might be about the ability to add metadata and re-use the content in multiple places.

Loretta Auvil:  SEASR

http://seasr.org/

The goal was to develop a software environment that would allow for the reuse of software components, focused on data mining applications for the humanities.  Looking at text analysis and music analysis, doing genre analysis and mood analysis.

The components and descriptions of those components are very web centric based on SOA and Semantic Web.  They are talking about a Semantic Enabled SOA.  The components are written in RDF.

Looking at interesting ways of searching:  Tag Clouds, Link Flows

Working on a workbench using Google Web Toolkit.  Allows you to do a mash-up of the components into flows.

Example applications:  MONK – it has a custom UI that calls SEASR as a service.  NEMA – a music analysis service that takes 10-second slices of an MP3 and looks at the genre and mood.

Steve Masover:  Project Bamboo

http://projectbamboo.uchicago.edu/
Flickr and del.icio.us tags:  projectbamboo

Asking the question:  How can we advance arts and humanities research through the development of shared technology services?

Areas of focus

  • Discovery and Analysis
  • Annotate and manage – including the idea of Folksonomic tagging with identifiable levels of authority.

Need to support serendipitous discovery.  Search is not useful if it limits serendipity and foraging.  Intellectual property pain and accelerating interdisciplinarity motivate "commons-based peer production" (cf. Yochai Benkler).  There is impatience with copyright.  There is a desire to support inter-scholar relationships, and community/networking that supports a "lattice of interest".  Legal and institutional policy are trending towards advocacy around fair use in law.

Emerging aspects of scholarly practice include: shared standards and services, social and scholarly networks, deep consortia across disciplines and national borders.  There is need for a chain-of-credibility in mash-ups.

Looking less at service/tool development and more at standards-profiling and services to facilitate interoperability.  One area that they might focus on is the sharing/tracking of reference use:  who used a resource in what context and for what purpose, and who provided the resources to the commons.

We are moving from a wedding-cake stack (data and repository, middleware, application on top) to a three-sided figure with mash-ups and tools on the edges of the triangle.

Ken K – we heard from an English scholar that he does not do "team English.  He is a cat and he does not want to be herded".

There is a tension between scholars wanting to know "who is using their stuff" but not wanting their activities monitored.

Daniel Davis:  Fedora Commons

http://www.fedora-commons.org/

Now a 501(c)(3) organization.  Moving from an internal grant-funded project to a community project.

Much of the work is focused on integrating services from other projects rather than re-writing code that already exists.

Splitting into multiple projects: 

  • Fedora Repository – original Fedora Project,
  • Middleware – looking at seamless integration between other groups’ services,
  • Akubra Storage – new storage plug-in architecture, transaction file system,
  • Topaz – core components for semantic-enabled apps currently publishing several journals mostly in medical research,
  • Mulgara Triplestore – highly scalable triplestore.

Relevant technical trends:  SOA, Web2.0, RDF, OWL and OWL-S

There are two paradigms that we are dealing with:  the lightweight Web model with little trust/security, and the Enterprise model where you have deep trust/security models (think HR systems).  A repository can bridge these two worlds.  You can easily repose content, then add a trust model and policy-driven controls for adding scholarly information on top of the content.

The Enterprise paradigm needs to support near-ACID (atomicity, consistency, isolation, and durability) semantics and a strong security and trust model.

Question:  the idea that there is a difference between Federated Identity and Federated Repositories, and how would that work?  They are different aspects, but related.  There are discussions about sharing information between the repositories, like User Accounts.  In one repository, a person might be an account; in the other, they might be a reference.  How much do you share between the two repositories?

Jens Haeusser:  Kuali Student

http://www.kuali.org/communities/ks/index.shtml

Keys:  a modular, standards-based student system.  Community-sourced rather than open source, in that there is a board that sets direction and manages the roadmap.  It is a person-centric system – focused on meeting the needs of the users of the system.  SOA-based.

Traditional ERPs – you tend to implement twice.  Once, when you try to make it meet your current practices and then again when you accept the best practices as defined by the vendor.

Functional Vision:  Support the end users by anticipating their needs.  Support a wide range of learners and learning activities (traditional students but also life-long learners, distance learners, exchange students et al).  Design to make it easier to change business processes.  Reduce time staff spend on routine tasks.

Technical Vision:  SOA and Web Services.  Not delivering an application as much as they are delivering a framework for you to deploy your business processes.  Using the Web Services stack:  Standards-based, adhere to Educational Community License (ECL).  Building the system in Java.  Open Source reference Implementation.

Guiding Principles for the KS Technical Architecture as a PDF

The functional design team is gathering input from a broad range of players from both within an institution as well as between institutions.

The first thing they are working on is Learning Unit Management.  Treating it more like SKUs.  You can compose them together to make larger units.  They have learned that the current way many systems define courses isn’t very good.

Technical Recommendations as a PDF

Database:  Apache Derby
Orchestration:  Apache ServiceMix, Sun OpenESB, Kuali Enterprise Workflow (KEW)

Created a standard development environment that includes a submission environment.  Maven and Subversion, Google Web Toolkit (UI).  Business Rule Management System (BRMS) to store and search for business rules includes a UI for business users to define the rules.  Looking at the Fluid Project for support of accessibility/usability requirements.

They are using different ESBs for different aspects of the framework.


ITANA Face 2 Face – Security Architecture

Indiana University

Completed a 10 year Strategic Plan which worked because they connected money to it.  You couldn’t get funding unless you showed how your project connected to one of the 71 strategic initiatives.  Completed a 10 year tactical Telecom Plan.  Instead of replacing 1/4 of the switches every year for four years, they want to replace all switches in one year so they can take advantage of new features.

An 802.1X access solution based on MAC addresses or logins.  Getting to automated, policy-based network access.  What is the value of this and what have people done in this area?  What are the policy zones?  This can flip things over so that we are both protecting our network from devices and protecting devices from our network.

This group could develop some design templates that schools could use in discussions with vendors.

UW-Madison

Should there even be a Security Architecture?  Shouldn't security be embedded in all of the groups and users?  When Stefan started in 2001, he was always asked "Why?" about security items.  Why do I need to use a firewall?  Why should I have logging turned on?  So he set out a set of principles:

  • Security is Everyone’s Responsibility
  • Security is Part of the Development Life Cycle
  • Security is Asset Management (classifying the information)
  • Security is a Common Understanding

We have a five step process for doing a risk assessment.  First we agree to the assessment scope, then conduct the assessment, develop a draft report, communicate the findings then re-assess as needed.

Risk = (Impact X Likelihood) / (Mitigation Controls)

Impact is related to costs.  How do you monetize reputation?  You can ask how much you would spend to prevent this from happening.  This is a Risk Prioritization process.
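
A hypothetical worked example (the 1-to-5 scales and the numbers are mine, not from the session): an incident with impact 4 and likelihood 3, mitigated by controls scored at 2, yields

```latex
% Hypothetical scores on a 1-5 scale; the numbers are illustrative only.
\mathrm{Risk} = \frac{\mathrm{Impact} \times \mathrm{Likelihood}}{\mathrm{Mitigation\ Controls}}
              = \frac{4 \times 3}{2} = 6
```

while the same incident with stronger controls (say 4) scores 3, so it drops lower in the prioritization.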

How do you balance the security principles against the development principles (scalability et al.)?


ITANA Face 2 Face: Data Management

Data Management  Discussion:

Key Issues:

  • Data Architecture, Analysis and Design
  • Data Security Management  – data access and security
  • Reference and Master Data Management  – making data available rather than copying data
  • Data Warehousing and Business Intelligence Management – normalizing the data across the data warehouse
  • Document, Record and Content Management
  • Meta Data Management

The difference between Structured Data (data in authoritative systems, usually in a database) and Unstructured Data.  Structured Data was designed by DBAs.  These systems can proliferate silos.  Complex queries are difficult to build and brittle.  The metadata and taxonomy as delivered are often "accepted" without thought as the enterprise definition and taxonomy.  They also include open fields to store whatever you want.

Unstructured data is individually generated, often in file systems, often without much metadata that is meaningful to the enterprise.  The rich media formats cannot be easily mined to discover content.  Management is a nightmare, with a proliferation of stores and types of content.

Structured Data Gaps:

Data Warehouses:  the warehouse was sold as a way to build a bridge across the silos.  The queries are difficult to construct and often take a lot of effort to get written.  It is hard to deliver the complex queries.  All the business logic that is used to develop the data and queries is missing.  There is a gap between the definitions and the data in the warehouse.  You can define "student" 12 ways, so any query could have 12 answers.

There is no business rules repository that lets you figure out how things are defined.  You can build business rules into the database and into the application code.  The farther you get from the source, the farther you get from the business rules and the definition and intent of the data.

Data Warehouse is used to buffer the source system from queries.

When we give out reporting tools to individuals in offices, it locks you into the schemas in the data warehouse.  As people develop their queries, it locks down the database table structure.  If you change the schema to make more enterprise sense, then many distributed queries suddenly break.  There are also "experts" who have a vested interest in the complexity of the data warehouse.  When you streamline and change the process and the queries, you actually threaten the experts.

LDAP as an example:  we bring data from a bunch of sources, then normalize the data and present it in standard queries for consumption at large.
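
As a concrete (hedged) illustration of what those standard queries look like from the consumer side, a directory lookup with the ldap3 Python library might look like this; the host name, base DN, person and attribute list are all made up:

```python
# Hedged sketch of a consumer querying the normalized directory view.
# Host, base DN, uid and attributes are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.edu", get_info=ALL)
conn = Connection(server, auto_bind=True)            # anonymous bind
conn.search("ou=people,dc=example,dc=edu",
            "(uid=bbadger)",
            attributes=["cn", "mail", "eduPersonAffiliation"])
for entry in conn.entries:
    print(entry.cn, entry.mail, entry.eduPersonAffiliation)
```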

A place to start:  things that go into an executive dashboard.

An Access To Data project turned into a drive to get large data sets into Excel on the desktop so people could drill around on their own.

Privilege Management: authorization in applications is based on a person's name, NOT on an institutional role.

At UW-Madison, we manage privileges by sneaker-net.  We don't have access to metadata that would let us generate privileges based on roles.  We don't have a way to delete someone from all of the systems when they leave or change roles.  The roles of people have states that we have to move them through.

There are multiple organization charts that come into play when you try to define the role(s) of a person, which can actually be different from the roles in an application.  Every application also has roles defined, and applications do RBAC.  But there needs to be an external system where you manage these people and roles.  There are two views:  one is that there have to be application-centric views of roles and privileges; the second is that there could be a set of pre-defined roles that come with a suite of privileges.

There is a set of RULES which are different from the roles.  The rules must be stored in a repository as well.
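
A hedged sketch of what externalizing roles and rules might look like; the role names, privileges and the "active status" rule are all hypothetical:

```python
# Hedged sketch of roles and rules managed outside any single application.
# Role names, privileges and the rule are invented for illustration.
ROLE_PRIVILEGES = {
    "registrar-staff": {"view-roster", "edit-enrollment"},
    "instructor":      {"view-roster", "post-grades"},
}

def active_roles(person):
    """A RULE, distinct from the roles: people who leave lose all roles."""
    if person["status"] != "active":
        return set()
    return set(person["roles"])

def privileges(person):
    """Union of the privileges granted by each of the person's active roles."""
    privs = set()
    for role in active_roles(person):
        privs |= ROLE_PRIVILEGES.get(role, set())
    return privs

bucky = {"name": "Bucky", "status": "active", "roles": ["instructor"]}
print(privileges(bucky))   # {'view-roster', 'post-grades'}
```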

Unstructured Data Gaps:

Electronically recorded lectures, talks, etc.:  We gather some metadata when we create the file, like the fact that it is the third lecture, created on this date, etc.  We cannot scan these files to get rich metadata.

Unstructured Data Management Architecture from IBM:  it is cycle-intensive.  It looks at 10-second clips of music and adds metadata (like "this is happy music").  The idea that you can just grind at the problem with computing power might work for a while.  There are vendors who are working in this space.

Just knowing what data exists is an important step.  Storage is just as important.  How long do you archive or repose the data?  At what level of storage should you store it?  The librarians are building dark archives.  They are storing data in hopes that some day we will be able to "do something with it".  The metadata harvesting and management tools are immature.

Digital Signatures:  When we throw stuff out onto the web or into distributed storage, how do we mark the content so we can mine the archives?  "If there was a point to doing it, people might do it."  Not many people see the value in deploying the systems.

On Wikipedia, authors claim to be professors when they aren't, so that their stuff will be taken more seriously.  The ability to express our university membership out in the world at large becomes more important.

Students will be coming to us with digital identities.  They will want to use those identities and we will become another fob on their keychain that they use in the world at large.  We may not be the source of their identities in the future.

All of the data is going to live someplace.  We will not be holding it all but we will need to be able to assert our IP over the data wherever it lives.  Look at the RIAA and their ability to enforce their IP across multiple platforms.

Standardized media formats:  

E-discovery:  When you have an e-discovery request, it is no longer personal data or institutional data.  What is the impact of distributed storage and Web 2.0 applications on e-discovery requests?  Where is the liability?  Who will be sued?  Don't change data management practices because of e-discovery.
