I'm Andy Johnson. I work with the technical team at ADL. I'm a Problem Solutions contractor for ADL,
not a government employee. Just a few housekeeping things: this presentation will run about forty-five
minutes, and we'll leave about fifteen minutes for questions and answers at the end. Feel free
to ask questions throughout in the chat pane. For individualized responses I've got
Jonathan Poltrack and Nikolaus Hruska, who are also technical team members assisting
in this webinar. They will either respond to you directly or save your question until the
end, where I will address it if it's a question that's probably more applicable across the board
to everybody that’s listening. All the resources you’ll see here
including the slides will be available online. There’s going to be a single URL that we will provide at the end that you can use to get all the
resources from this webinar. Don’t feel like you have to scramble writing down links
or anything else as we go through. So with that we will get started. Basically where we’re starting from is SCORM.
I mean we’re ADL, we’ve done SCORM and SCORM is a huge success. There are pages of companies who have adopted SCORM
and it's the de facto global learning standard. It's used at the Department of Defense.
It's used in academia and industry. It does what it does very well, it's not going anywhere,
and it will always be good for that single-learner interaction between content and
learning management system, tracking the things that a learner does within a
traditional learning management system. However, through project feedback and
helpdesk tickets, as well as partner meetings, we've also had specific
requirements-gathering projects where we've gone out, set up forums, and run
dedicated efforts to learn what people coming from a SCORM background really want to do
as they move forward. We found that there were a
variety of needs that people were not having met
by SCORM, and some of those are listed here. Basically they all come down to tracking
diverse user learning experiences, moving beyond that single-learner model. If you
wanted to do anything in SCORM with multiple learners in the same course or content, there
just really wasn't a way to do that. SCORM came out in the early 2000s, and since then we've
had a great deal of technology advances. So there are a lot of out-of-date practices that are
still in SCORM. Content sequencing was something we really didn't
do well in SCORM. In terms of making it simple enough for
people to use, and even hitting all of
the complex use cases, we saw that it needs to be either improved
or eliminated as we move forward. We need to do a better job including tools,
guides, and best practices. We do have a variety of SCORM resources available now,
but at the time SCORM came out they were very limited. We need to provide clear instructions and more
efficient testing. Some of you may or may not be aware that
there are gray areas in SCORM: when it comes down to testing
content versus systems, there are still some things that can pass one system and not another, and you'll
see different behaviors between the two. Finally, and probably most important, we need to
find a way to expose user data. If there's one common complaint about SCORM,
it's that there was not really a concrete way in there to say, "Look, if you're
tracking all of this data, here's how you expose it." There is no requirement for a
learning management system, even if it is SCORM conformant, to provide a way to let the user,
or content administrator, or whatever role you're talking about, actually get the data out. So from all these requirements we came up with what we called
next-generation SCORM, and that's a tagline we used for a while. It's something bigger and better than
just SCORM. We wanted to be able to access the content from any device. We wanted just-in-time,
just-for-you learning: learning from intelligent tutors, or incorporating real-world mentors and peers
through social networks, or discovering learning yourself, using a variety of different mediums that might be available online:
games, virtual worlds, and intelligent content. This slide really shows that environment, and the other half
that we are going to end up getting to as a part of the TLA is that standardization of some of the implementation details
of what next-gen SCORM became. When we looked at what it became, we decided that it really wasn't so much a
SCORM thing; we don't need an evolution of SCORM. SCORM 1.2 was very well adopted.
Then in later versions we fixed the bugs that were in it, and we attempted to add functionality in sequencing
through adding another book to SCORM, the Sequencing and Navigation book, as part of SCORM 2004. In hindsight, the versioning and bug-squashing
effort and the addition of new functionality really needed to be separate efforts, because sequencing and navigation drastically
changed SCORM. If somebody wanted to be up to date and use up-to-date
tools and up-to-date conformance testing, they now had to adopt sequencing if they were a
learning management system provider. Even the content requirements changed slightly:
you could not use the simplest SCORM 1.2 package in a 2004 LMS, even if you didn't want to do any
sequencing at all. So we need this SCORM capability, but in a more flexible
and current capacity. That's what moving beyond SCORM is really going to be.
It's not so much the next-generation SCORM. It's going to be the Training and Learning Architecture, and that Training and Learning Architecture is eventually
going to be able to do everything SCORM can do and more. Now, Training and Learning Architecture adoption is going to be à la carte: you can pick what you want to do. This doesn't mean that the parts
are completely separate and have no interactions between them, but we're not going to have a scenario like in SCORM where, if you
don't want to do sequencing, you still have to adopt sequencing support or you can't get any certification.
We're going to make it so that different pieces can be individually chosen and eventually conform.
We don't want to make you swallow that big horse pill just to hop onboard the TLA, especially because the TLA is going to be larger.
These are the four basic areas of the TLA: experience tracking,
content brokering, learner profiles, and competency networks. One thing that is very important as we are going forward
is that the government itself is actually tackling a twenty-first-century digital approach.
We want to get better, faster, and cheaper. We want to get better in that we want to learn
lessons from others that are in the field, and we want to streamline the process for content and system creation. As you see in the goals here from President Obama, we want to
have technology make a difference in people's lives and ultimately in government, and we want to do that from a shared-platform
approach, offering digital services and managing data. We want to get faster through the acceleration of new technologies, the distribution
of work, and the sharing of content, experts, and reusable parts; we can develop content prototypes quickly and
find out what's really working and what isn't. And finally, cheaper: there are a lot of passionate people out
there with time to spare. There are a lot of people that might be in one job and are able to do that job, but have a little
extra interest in another project. We can get smart people across government, industry, and academia working on
projects together. Quite frankly, the more eyes that are able to be on these projects
and these web services, the better it's going to be for developing true content and true capabilities for the twenty-first
century. So I've talked a little bit about web services, and
that's really what we're looking to do here: moving to a twenty-first-century mode of learning.
Things are à la carte. You're not necessarily going to buy them all in
one system. You might buy a system for eighty percent of the capabilities
but expect to get the other twenty percent somewhere else. A few of these goals are very similar to the SCORM abilities:
building non-proprietary solutions through standards-based
communication. That's what interoperability means: it doesn't have to be a single language anymore.
Rather than saying that everybody is going
to use the same language, we are going to support all the languages. Usability and reusability
pretty much go hand in hand with the idea that we're not going to reinvent things; we're going to
design common components so that code doesn't have to be duplicated. Finally, deployability: we are leveraging standard internet
technologies, not necessarily relying on the old ones, and we're going to do things in an open-source way. Every single TLA
component and the processes involved are going to be open source. The technical specifications got us a sense of sameness, but we also
want customizability without losing accountability. Those are two opposite ends of the spectrum: open source allows
you complete customizability, while the traditional managed way, where everything is done inside, gives complete
accountability. So the areas we want to hit here are security and quality; again, more eyes are better to catch mistakes.
We are going to get more people testing it. Freedom: customization and updates can be
done without relying on the original person or the original vendor that developed it.
Auditability: we can see who actually created a certain piece of code and get back to them.
On the overall cost of open source: you're going to need to have some sort of expert on hand that you have to pay,
but overall you're not paying licensing costs. And finally, the support options are just
fantastic, because there are people out there that just want to help each other and share expertise. That can be
really hard to do if you're talking about a closed installation; in those cases, sharing information,
or even finding other people that are implementing the same thing, is not easy to do. Next is the TLA component lifecycle.
This is basically how something flows through the entire process of becoming a part of the TLA, the Training and
Learning Architecture. All these processes are either open or open source.
We start with the BAA, which is a Broad Agency Announcement of an ADL investigation. This is essentially finding an area of
interest and working with one specific group, or internally, to flesh out an idea. That process is completely open.
Anyone can apply for those Broad Agency Announcements and contracts when they come up. Through that,
a web service prototype eventually comes to us at the end. In terms of a deliverable, we are always going to want some sort
of tangible thing rather than simply research. That prototype will become a part of what we put out there
on the web and invite community participation in. Eventually, we select the ones that make sense
and that we feel are mature enough. We build those into an ADL community project, where ADL leads a
group of people. Again, this is completely open:
whoever wants to participate can, in really creating something out there that can be used by the entire community.
And finally, because we are a scientific organization that focuses on research and development,
we're not in the business of keeping and maintaining things. We have SCORM, and we are still looking to transition
it, but sometimes the writing is on the wall: for all these TLA components, we're not
going to be able to give them all the same kind of support we could give SCORM, simply because we don't have enough manpower to do it.
We are looking to transition those to other open-source bodies to hold onto those specifications
and keep the community engaged with them. Really, the whole process gives us the best of what's out there, because
people who are passionate are going to want to work with us. As we go forward on these different TLA projects, I'll point out
where in the lifecycle each of those various components is. So let's talk about experience tracking. I think of the
whole TLA as some sort of plumbing architecture, and experience tracking is figuring out what is going through those tubes.
Is it water? Is it air? Is it oil? It's really tough to build an entire architecture
if you don't even know what's going to be flowing through it. It's kind of a funny thing. Going back to SCORM: we've always been able to track all these various learner experiences.
If you can remember the old simulations, things that were built SCORM-compliant, there's a lot going on in there.
There's the movement, not necessarily just the mouse movement, but the things that are selected in the simulation. If you're in a multiplayer
online game, where are you actually going, and what are your responses? If an interaction or a problem presents itself,
what's your response to it? What do you decide if you are asked a question? Even the delays we have are an experience.
If it takes you a certain amount of time to respond, what does that mean? All that stuff is trackable. Even the internet searches
you might do while picking learning content. We have this wide pipe of learning information that's available, but
unfortunately, with traditional tracking, we filter all that stuff out. It comes in as water or steam or whatever, and we end up tracking
just the score or just the completion, and we've lost all of that other information. At the time that was appropriate, because there was not the
throughput or the resources that are available today. Data is where it is now because
technology can support it. It's time to take off that filter and get to what we're calling the Experience API. With TLA experience
tracking, the main part is this Experience API, or experience application programming interface. It can enable web services and systems to share data in
this common format. Share the water, if you will. It can share interaction and performance
data. By agreeing on a way to expose this data, which is statements, we can enable other systems and capabilities to do creative things
with that data. Now, the requirements you'll see here are essentially what I just talked about; this is what we're
looking to do. The services and specifications are things that we think will eventually
become a technical specification, and the Experience API is one of those. We're currently in the ADL
community management phase of that project and are looking to transition it to a technical specification eventually. We're actually looking
to get a one-dot-oh (1.0) draft of that specification out by April 26 of this year. It's probably going to take some time
to get it into the hands of a standards body, but that's the end goal. Similarly, for each of these capabilities we're going to have
open-source software that ADL puts out on GitHub as a means of exposing to the community all this stuff we are working on. You can take it, download it,
play with it, use it. You may have heard of this particular project: we've got the open-source
Learning Record Store, Experience API examples, and reusable code libraries. Now I want to briefly talk
about what this LRS (Learning Record Store) is. We've talked a little bit about what's going to be going through those pipes,
but first let's talk about the Learning Record Store itself. To go back to the plumbing analogy, the Learning Record Store is essentially the tank,
whether it's holding water or air, that sits there and collects the streams of information
that other components are eventually going to draw off of. So it is basically storage capacity. If you look at what traditional
learning management systems (LMSs) have done, they've done everything: they've managed users, they host some of
the content, they do sequencing and delivery, gradebooks, and all kinds of stuff. But the Learning Record Store
just does learning records. Using the web-service approach means that the Learning Record Store
is just learning records. It can exist as a part of an LMS; as we saw in the previous slide, it was
one member of many web services integrated into that one LMS. But having it focus on one thing means that more systems can
integrate with it, and because of that, the barrier to entry for using the Experience API is lower. All you have to do is create
a Learning Record Store that can interface with it, and then build whatever other service you want to interface with them both. Now the Experience
API, as we talked about, is essentially activity streams. What I mean by that: activity streams are used by social media to say things like
"I did this" or "I took a certification course on CPR and passed it." They're unique because they read as sentences, or statements, which is what
we call them in the specification. They're both human readable and machine readable. So a
web service can take apart those various statements and aggregate them or sort them in a meaningful way, or you can just be a person reading them. If you've ever played an
online game, a lot of the time you'll see these streams go by describing what's happening. The system is clearly calculating what's actually going on
behind the scenes; it's just also displaying it in a meaningful way for human eyes. The Experience API enables many kinds of tracking, whereas SCORM essentially allows just that
single computer, usually a PC or Mac or anything with a web browser, to track it. That's the limitation it had. But now the Experience API can allow mobile, augmented
reality, massively multiplayer online games, simulations: all of those can send activity streams to this Learning Record Store. It can all be tracked,
and then the Learning Record Store can act on it. Well, it doesn't really act; it serves as
the box that everything can pull from. But now all that information is stored in one place that's centrally located, not
tied behind a firewall or necessarily needing a great deal of integration to get at. The various web components can pull from it. We can have things like
assessment systems, HR systems, learning applications, statistical apps, and surveying tools that can grab from this Learning Record Store
and do meaningful things with the data. The concept of an offline Learning Record Store is even viable, because with the Experience
API, tracking is assumed to flow one way, from the experience to the LRS; after that, the LRS and other components have to work it out. So you can theoretically track offline much more easily than you could with SCORM,
which relied on two-way communication. Now, here is a pretty common question that we keep getting asked.
Is the Experience API a replacement for SCORM? Can it replace what SCORM is doing? Should I stop doing SCORM and
do the Experience API instead? The Experience API is not a replacement for SCORM, because it's only a data transfer protocol;
that is just one piece of what SCORM can do. In that way SCORM is actually the broader specification: it makes more
behavior common across implementations, whereas the Experience API is just a means of tracking. What's great about both of these is that they can actually be used simultaneously.
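To make the statement idea concrete, here is a minimal sketch in Python of an activity-stream statement in the actor-verb-object shape described above. The field names follow the draft Experience API conventions, but the verb URI, the addresses, and the helper function are illustrative assumptions, not normative:

```python
import json

def make_statement(actor_email, verb, activity_id, activity_name, success=None):
    """Build a minimal actor/verb/object statement (illustrative shape)."""
    statement = {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {
            # Illustrative verb URI in the ADL style
            "id": f"http://adlnet.gov/expapi/verbs/{verb}",
            "display": {"en-US": verb},
        },
        "object": {
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }
    if success is not None:
        statement["result"] = {"success": success}
    return statement

# "I passed the CPR certification course."
stmt = make_statement(
    "learner@example.com", "passed",
    "http://example.com/courses/cpr-cert", "CPR Certification Course",
    success=True,
)

# Machine readable: this JSON is what would be POSTed to an LRS.
print(json.dumps(stmt, indent=2))

# Human readable: the same statement reads like a sentence.
print(stmt["actor"]["mbox"], stmt["verb"]["display"]["en-US"],
      stmt["object"]["definition"]["name"]["en-US"])
```

Sending it is then just a one-way HTTP POST of that JSON to the LRS, which is also what makes offline queuing of statements straightforward.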
Neither format restricts the other from tracking anything. SCORM just has many more specifics to it, which makes it operate in a very specific
environment, whereas the Experience API can go across many platforms. But as I said earlier, the Training and Learning Architecture, when it's mature, will eventually be able to do what SCORM can do and more. We are just not even close to that yet. Even the most mature component, the Experience API,
is still only in that third stage and just barely coming out. So imagine an entire forest of plumbing architecture: things still need to be created
before we can do what SCORM does. However, we recognize that a lot of people and organizations are going
to be starting from SCORM. We're going to make sure each component has a dedicated plan for a
transition from SCORM to the TLA. We don't think it should be easier for somebody brand new to come in and implement
each TLA component than for someone coming from SCORM. We really want to make sure that if you're starting from an area where you're
good at developing online learning content and you understand things like SCORM, transitioning to the TLA is going to be easy. We are going to provide
best practices for updating content and systems, and we are going to provide software libraries and wrappers for free on our
open-source GitHub site to support it. The next piece I want to talk about is the content brokering part of the TLA. If you think of experience tracking as the runtime part of what's going on, then
the content brokering piece is the sequencing and navigation. So it's really important for us to get that one right, after the previous
sequencing and navigation attempt made things much harder than they needed to be. The simple way to frame this is:
what is the next logical activity? What is the next piece of content that needs to be launched? It should be just in time. We need to identify,
through the user's current experience or through whatever gap analysis we can perform, what content should be
displayed, downloaded, or launched, whatever you want to call it. We need the right content at the right time. We want to take into
account the available content that's out there, so we've got a lot of different pieces to this content brokering
solution. You'll notice that under specifications and services we only have the
3D Repository. Really, a lot of these pieces end up playing together in a very unique way,
in that we have a roadmap for how to get to content brokering. We've broken it down into basically eight areas of interest, and three things that
ADL is doing to advance the capability. When we talk about delivery, we are talking about the actual launch, including authentication and authorization.
When we talk about storage, we are including retrieval. Federation refers to the idea that there will be multiple sources of content,
but we want to give everything the same universal search. Adoption: how do we get new content providers into an existing system?
Policy: how do different organizational goals within a single federation coexist with the individual expectations
of the organizations being managed? Social: the idea of having ratings and recommendations as part of
determining what the next logical activity is. Metadata: how do we describe learning content. And paradata, which is usage data
and data derived from the actual content itself, so we can see what people are searching for, what people are doing, and,
if someone is successful at what you're looking at, what else they can do. To go across this roadmap, the first thing we need to do is standardize the data: what are the common language components we're looking at here?
For metadata, that's pretty minimal coming from SCORM: use of LOM metadata. That's the standardize-the-data part of it. Then we have to standardize the practice of it, and that's simply to
create policy and rules around each of these. To go back to SCORM: at one point SCORM had a profile for metadata, where it told you what you needed to do. The Experience API is going to have things
like that as well, where there is going to be an ADL profile that attempts to
help people follow these best practices in a standardized way. And finally, for content brokering, the end game is to eventually deliver that
next logical piece of content. How does each of these eight specific areas map to what that next logical
piece of content is? How does that end up in some giant equation, or some personal system for
learning, or some intelligent tutor that gives me what I need next? Now, each of the components I'm going to talk about next fills in a part of
that roadmap. The first two systems were developed not with the intent of becoming a part of
a training and learning architecture, but with the intent of solving a real-world problem that we were aware of,
and we can leverage the lessons learned from that. The ADL 3D Repository is already in use. There are many people that have
spent a lot of money developing 3D graphics for use in virtual worlds. When you do that, you have to pick a platform to deploy on;
you see some of those on the right side here. But what ends up happening is that on two different contracts, or even
sometimes the same contract, pieces are built for different virtual-world environments or virtual-world builders that have different
requirements for how you actually build the graphic itself. So money ends up getting wasted, spent multiple times for essentially
the exact same graphic. This project has a variety of components to it.
As a proof of concept, this is an open-source repository, and what we've gotten is a lot of lessons in tagging, organizing, and
showing the content. But most important for the 3DR is that federation API: the 3DR is really good at taking a lot of different sources of content and enabling a single search that will actually find them all in a meaningful way. It involves taking features of social media and a great deal of metadata for
describing individual assets. The reason paradata has a punch in that roadmap here is because the 3DR can actually derive data from the
graphic itself: it can derive conversion strategies and polygon counts simply
from uploaded content. So we think that in these particular areas of the TLA, the 3DR can really give
us some valuable lessons learned and some footing as we go forward. The Learning Registry is another one of these systems that already exists,
and its purpose is essentially to show people who are looking for learning content what's already out there, and, if you have learning content, how to go about sharing it. This is another member of the prototype chain that we've gone down. If you remember CORDRA, that was really the first one of these that existed; this is probably the third one down, where we're actually working on getting people
together to register content. That process was painful at first, but eventually what we had was a web service
that connects the repositories together. So you'll see here that it covers adoption and policy: despite multiple organizations having different policies, we're able to get them in here
and onboard them. In this case, storage is more about retrieval:
a common way to search, and common metadata across them, so we can search large amounts of content
implemented and built by organizations. The final piece of content brokering is RUSSEL.
RUSSEL is actually in the BAA stage right now; it's going to be a prototype shortly,
and we're expecting the deliverable at the end of March. RUSSEL has a lot of different pieces to it.
It's an out-of-the-box content repository and digital library. It has different ways of managing that repository, and it has integration points with the actual content.
So you see the workflow here: it has real ISD parts to it, where you can design content
and do things like instructional strategies. You can then go out, and through another federation API you can find
other repositories and content, or content within your own repository,
and add to what you've already got. You can get that to the developer, who can identify objectives and strategies
through the metadata provided within the RUSSEL framework. Then you start your process cycle of actually developing content:
you develop it, review it, and revise it. Eventually you can upload it to RUSSEL and publish metadata and
paradata around that content and how it's been used within your repository and other repositories. So, as you see here on the RUSSEL features slide, these are some of the areas
we're hoping this can tackle. It allows creation of your own repository and federation, so you've
got the storage part in the repository and you federate across multiple repositories. There's the social aspect, in that you can use ratings and recommendations within
the RUSSEL framework. Metadata actually goes all across the board, in that the metadata
allowed within RUSSEL gets us to the next part of the TLA I'm going to talk about, which is learner profiles. And finally there's the paradata it supports: the usage data
of the content that exists in that repository. The next piece of the TLA is learner profiles. Really, this is talking about storage and retrieval of learner information rather
than content information, and how it exists across different platforms and systems. This is a big one,
because it has typically been a hard problem to share different aliases, if you will, online. Let's say you register for some learning content as a student,
and you're doing it to get your MBA. That's a separate process from the actual work training you might be taking as well. You might have two different aliases; you may have two different email
addresses associated with those different learning activities. There needs to be a way that learners themselves can manage those
profiles, rather than a single proprietary system doing it. Across all that, we've got to be able to compile data about the learner
based on course completions, competencies, preferences, learning styles, and what they've done.
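As a purely hypothetical sketch of that compilation, the snippet below folds a stream of simplified statements into a per-learner profile, then applies a toy gap analysis of the kind the content brokering piece needs for "next logical content." Every field name, verb, and rule here is an illustrative assumption, not part of any specification:

```python
def compile_profile(statements):
    """Fold simplified activity statements into per-learner profiles."""
    profiles = {}
    for s in statements:
        p = profiles.setdefault(s["actor"], {"completed": set(), "competencies": set()})
        if s["verb"] == "completed":
            p["completed"].add(s["object"])
        elif s["verb"] == "earned":
            p["competencies"].add(s["object"])
    return profiles

def next_logical_content(profile, catalog):
    """Toy gap analysis: first catalog item whose prerequisites are all met
    and which the learner has not already completed."""
    for item, prereqs in catalog:
        if item not in profile["completed"] and prereqs <= profile["competencies"]:
            return item
    return None

statements = [
    {"actor": "mailto:learner@example.com", "verb": "completed", "object": "algebra-1"},
    {"actor": "mailto:learner@example.com", "verb": "earned", "object": "linear-equations"},
]
catalog = [
    ("calculus-1", {"linear-equations", "functions"}),  # prerequisite gap remains
    ("algebra-2", {"linear-equations"}),                # ready to launch
]

profiles = compile_profile(statements)
me = profiles["mailto:learner@example.com"]
print(next_logical_content(me, catalog))  # → algebra-2
```

The point is not the rule itself but that, once learner data is compiled in one place, any brokering component can apply its own logic to it.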
You'll see this with gaming, with badges and achievements, where they'll be associated with the learner profile or a backpack,
as we've seen with Open Badges. Essentially, you can collect these things as experiences of what you've done,
which of course supports the experience tracking portion of the TLA. The idea here is that learner profiles will support content brokering,
to help us determine that next logical piece of content. But we also need to use them to report meaningful things
about learners in general, not just what the next logical piece is. This project, or capability, is really in its infancy. We just kicked off the first
BAA effort on adaptive navigation support and an open social learner model for the PAL; I'll get to what the PAL is. It's really in its infancy, and we'll see where
it takes us. The final component is competency networks. There's really a logical mapping between competencies and learner profiles, in that we can't
just hand out badges and achievements: they have to mean something. In order to mean something, how you actually got that achievement or
badge, and who gave it to you, is just as important as what it stands for. These competency networks need to exist for this to be possible. The way this
works is that we need peers, provider systems, and mentors all connected and agreeing on a common framework
to describe the competency relationships: both how the different people or organizations who
assign competencies coexist, and how different competencies relate to each other. Essentially we are asking questions like: What does it mean to pass algebra?
How does one pass algebra? Your university might not accept another university's algebra credit, and why
is that? We need to get people and organizations on the same common framework and the same competency network. I want to provide just a little background on the PAL and how that relates
to the Training and Learning Architecture. Those of you who follow ADL know that our big future vision, ten
to fifteen years out, is the idea of this personal assistant for learning. Basically, it's technology that's going to be unobtrusive, intelligent, and
ubiquitous. It's going to be something everybody can use, personalized to you. It's going to be transparent, so you're not carrying around something bulky,
and it's going to be online all the time. It's going to support networking to an entire network of peers and mentors
sharing information, and it will use some sort of artificial intelligence to determine what that next logical piece of content is, of course using the most up-to-date dynamic learner model and
findings from fields like cognition. To describe the relationship between the two: the PAL is a holistic solution.
It assumes that all the components are designed and all of these various systems are online with populated data. The reality is that these are not designed yet. So the TLA is going to pave the way for this PAL to exist through the research in these different programs. The TLA will work on some of those components and recognize others; it will identify what systems should exist, but it is not going to specifically build all of them. The PAL is actually going to end up using these systems together, recognizing those relationships, to build something that works to meet its goals. As I said, we are open source and we really want people to get involved
with these various capabilities that we’re doing. So the TLA is one of those, it’s the larger architecture.
We’re always looking for people to contribute. You can download our information. Again, at the end there’s going to be a resource slide that will link to all of this.
So don’t scramble to write these down. This address is going to have everything. Experience API, we’re in the final stages of the community management now but
we certainly are going to need more eyes on the spec itself. We’ve had a nice core group of people that have worked with us.
Sometimes you might need a gut check with some fresh eyes to say “Hey, you guys are doing this wrong” or “We know you’ve made decisions, but here’s what we need”. We are always looking for use cases. Whatever level it is, if you’re just interested in how it’s going because your organization is going to be implementing it, fantastic, we could use you. If you’re going to adopt it, you know you’re going to adopt it, and you want to start working on it right away, we’ve got ways for you to get involved there. If you want to actually help us write the technical specification,
we always can use more people for that effort. Here are the resources that are going to be out there. These will all be on the next link that I’ll provide: basically the next generation SCORM requirements and how we got to deriving community content and requirements, the Training and Learning Architecture itself, the Experience API, and SCORM, if you’re still interested in that. Of course the ADL GitHub is probably the most important, because it has all of our open-source tools and demos, and there’s a tech team blog if you are interested in some of the exploratory writing that we’ve done around some of these projects.
And that’s it. I’m going to let Jonathan Poltrack and Nik Hruska feed me questions now. See that link on the bottom for our webinar resources. Write down that URL. If you’re going to write one down, that’s the one: it’s going to have this presentation as well as all of the research we talked about. John and Nik, I’m unmuting you so you can answer questions.>>Nik: Can you hear me okay, Andy?
>>Andy: Yes I can.>>Nik: Okay, very good. I’m going to give two quick clarifications
and then I have five questions that I think are broadly applicable. There are some other questions coming in but first a quick comment on the BAA.
There are some restrictions to the types of companies that can respond to a BAA. Instead of going into all the details here, check out the BAA on our website; the contracting office provides all the details that are required to look into it. When we said that anybody can really apply, what we meant is any size of company. We’re really looking for ideas from the largest corporations, the smallest corporations, people with one-man shops in their garages. There are some restrictions to the types of companies that can actually respond
with the white paper, so check that out on our website. Second, I know Andy talked a little bit about this and I think it was a little early on
when we created the slides and we didn’t have this detail. We do have a project now under the learner profile category in the labs
and as we move forward with that we’ll put out more information on the website so that it’s available and you can see what we’re doing there. Okay. So now to the questions, Andy. There were several, so I’m going to combine the ones that asked the same question in a slightly different way; I’ll try to combine them into a single question in a couple of cases. So first, a question specifically about the LRS and the meaning of a learning record. In the past, when we’ve said learning records, we’ve really talked about transcript data. In the case of the learning record store, we’re talking about something a little bit different and a little bit broader. I know Andy provided some examples inline, but can you maybe talk just a little bit about what you mean by a learning record in the context of a learning record store?>>Andy: Sure, and yes, that’s very true.
In the past, especially under SCORM, when we talked about a learning record we were talking about scores and completion, passing and failing, and content and objectives. But now we can really track anything, and everything that’s tracked is a learning record. So if you’re in a simulation where you’re operating a tank, you could track how many times you fire, every individual shot, every individual turning of the gun, every individual operation of any control within that tank. You could even track how you react or how you talk to your officer while you’re performing the simulation. Really, it’s anything you want to do. Anything you can fit into an action and an activity, you can do it. Think of playing a video game. You could break it down in the context of your pushing a button, or you could break it down at the level of what’s actually happening on the screen. So if I’m playing Nintendo, I could say “Andy pushed A” as a learning record. But I could also say “Mario jumped” as a learning record. It’s completely wide open.>>Nik: Thanks Andy.
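The “actor, verb, object” pattern Andy describes maps directly onto an xAPI statement. As a rough sketch of both levels of granularity, assuming hypothetical verb and activity IRIs and example email addresses (a real implementation would use registered verb IRIs and its own activity identifiers):

```python
import json
import uuid
from datetime import datetime, timezone

def make_statement(actor_name, actor_mbox, verb_id, verb_display,
                   activity_id, activity_name):
    """Build a minimal xAPI-style statement: 'actor verb object'."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"objectType": "Agent",
                  "name": actor_name,
                  "mbox": actor_mbox},
        "verb": {"id": verb_id,
                 "display": {"en-US": verb_display}},
        "object": {"objectType": "Activity",
                   "id": activity_id,
                   "definition": {"name": {"en-US": activity_name}}},
    }

# Button-level granularity: "Andy pushed A"
low_level = make_statement(
    "Andy", "mailto:andy@example.com",
    "http://example.com/verbs/pushed", "pushed",
    "http://example.com/activities/button-a", "Button A")

# In-game granularity: "Mario jumped"
high_level = make_statement(
    "Mario", "mailto:mario@example.com",
    "http://example.com/verbs/jumped", "jumped",
    "http://example.com/activities/level-1-1", "Level 1-1")

print(json.dumps(low_level, indent=2))
```

Either statement could be sent to an LRS; which granularity you choose is entirely up to the content designer, which is the “completely wide open” point above.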
Another question came in. We talked about the Experience API, and we mentioned the activity streams technology, which is a bit different. Some people had questions on whether there are some major distinctions: is the Experience API the exact same thing as an activity stream, and if they were going to update something that supports activity streams to support the Experience API, how different are they?>>Andy: That’s a really good question, and we’ve actually had a lot of discussion in the Experience API group around this. Essentially, right now the two are just too different. We feel that there are a lot more requirements that the Experience API should be able to support, surrounding groups of learners, identifying groups of learners, and different memberships, that activity streams really just don’t have. Activity streams are based on a single-learner type of model, a single-learner-experience type of model, I should say. So we’re expecting that we’re going to remain separate from that effort, even though there are a lot of similarities between the two, because we just feel that the community requirements are too strong to conform to them. We’re hoping that with enough adoption they will either have a profile that conforms to us or adopt the direction we’re taking, especially in enabling multiple groups and multiple learners to be part of the same activity.>>Nik: Great. Thank you Andy.
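The group-of-learners capability Andy points to is the piece activity streams lack: an xAPI statement can name a Group, with member Agents, as its actor. A minimal sketch, again using illustrative names, addresses and IRIs rather than real identifiers:

```python
import json

# A Group actor: several Agents sharing one activity in a single statement.
# Names, mailto addresses and IRIs are hypothetical examples.
statement = {
    "actor": {
        "objectType": "Group",
        "name": "Tank Crew Alpha",
        "member": [
            {"objectType": "Agent", "name": "Driver",
             "mbox": "mailto:driver@example.com"},
            {"objectType": "Agent", "name": "Gunner",
             "mbox": "mailto:gunner@example.com"},
        ],
    },
    "verb": {
        "id": "http://example.com/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/tank-mission-1",
        "definition": {"name": {"en-US": "Tank Mission 1"}},
    },
}

print(json.dumps(statement, indent=2))
```

A single-actor activity stream entry would have to record each crew member separately, losing the fact that the mission was a shared experience.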
Okay, our next question. There are a few people who asked a couple of different parts, so I’ll try to sum it up in maybe two or three parts. The first part: will the TLA do everything that is currently supported in SCORM for web-based content? And following up on that, will we all have an option to use the TLA or SCORM? So will the TLA support a similar set of functionality to SCORM?>>Andy: Yes, the TLA will be able to support everything that
SCORM currently does. Will it do it in the exact same way? I doubt it. But every single capability is going to be there. To me it doesn’t make sense, if we look at what SCORM can do, not to include all of that in the TLA. We briefly talked about two of the major components of the runtime environment being the Experience API and the content brokering, the latter essentially being the sequencing and navigation. There’s nothing that’s going to preclude using both of them together. There’s really not going to be anything that’s so demanding in the TLA. You know, we learned our lessons from making things really, really strict, and it’s not going to happen again. So SCORM is always going to be the stricter of the two, and the TLA is going to be built so that it can allow SCORM things to happen. It certainly isn’t going to be dependent on SCORM things happening, and it’s going to work in environments where SCORM might not be possible.>>Nik: Great, Andy. I’ll follow up on that, and I think this may be the last one.
So if SCORM still exists, knowing the challenges of sequencing and using mobile devices, will there be an update to SCORM for those things, or will those new types of functionality just be handled in the TLA?>>Andy: They’re basically going to have to be handled in the TLA. A caveat to that is that a couple of years ago we put out a demonstration of actually doing SCORM in environments that people typically didn’t think to use SCORM in. So we’re always willing to work with people, especially DoD partners, who might have a requirement to use SCORM in a non-traditional way. I can just tell you, from an implementation standpoint, that if you are dependent on using mobile or simulations or augmented reality, moving to the TLA is just going to be much easier in the long run for adopting those types of things. In SCORM itself, even if you can get around them for now, the technical requirements are eventually going to force out a lot of those mediums. It’s just not going to be possible.>>Nik: Okay Andy. I think that’s the last one.
The other ones were addressed in line. Let me double check to make sure we didn’t have any more come in.
No, that’s it. You’ve addressed them all.>>Andy: Okay. Thank you everyone for showing up.
Thank you John and Nik for handling those questions that came in and for feeding me some at the end.
We will have more webinars online every month, so please stop back. Thank you.