What is blocking VR adoption? – Reality Strikes

From BP to KFC, huge companies all across the world are turning to virtual reality (VR) to train their employees. There’s no doubt that VR technology can take boring, traditional training and turn it into an immersive, hands-on experience. So why haven’t more eLearning companies jumped on the bandwagon?

It’s clear that VR is more than just a passing fad. Now that products like Google Cardboard make VR headsets affordable and compatible with your smartphone, VR is more accessible than ever. But most of the best eLearning companies still favor interactive learning modules and tutorials over VR integration when it comes to custom eLearning design.

Is VR too distracting to integrate into online learning or is there another obstacle preventing it from being used more commonly? Let’s break down the pros and cons of using VR in eLearning and blended teaching.

Benefits of Using VR in eLearning 

1. Create an Immersive Learning Environment

This is the biggest draw of VR for educational purposes: it allows the learner to interact with new information in an immersive environment.

Google Expeditions and virtual field trips can be great substitutes for exploring a new region or landmark when the person isn’t able to go there in person. Other companies offer VR environments designed to help improve soft skills like public speaking and interviewing.

Some of the most cutting-edge eLearning companies have even created VR training for healthcare companies and the military. In this kind of high-stakes environment, trainees are able to put their learning to the test in a realistic setting before they have to perform in the field, where making a mistake can have much bigger consequences.

2. Sparks Interest

Let’s face it, there’s something about those VR headsets that inspires excitement. When an audience is interested in a topic or training, they’re going to be more engaged and chances are the learning will be much more effective. Since creating effective learning is the goal of every instructional design company, this is hugely beneficial.

3. Active Learning Promotes Deeper Learning

A visually interesting, interactive learning environment where trainees can apply their knowledge is as close to on-the-job training as you can get. For many online learners, internships or cohorts aren’t an option, and VR fills that gap.

There’s a reason health professionals, teachers, military personnel and many other professionals are required to get hands-on training before becoming fully qualified: learning by doing is the best way to master a skill. Since the ultimate goal of eLearning companies and software is to deliver high-quality, effective training, VR seems like a great way to get results.

Drawbacks of using VR in eLearning

1. Cost

This is probably the biggest obstacle when it comes to using VR to enhance training solutions: it’s just too expensive. Even though smartphones have made VR more accessible to the masses, the capabilities required by many training courses reach beyond what a learner can do at home with their phone.

2. Implementation

The reason most companies choose eLearning solutions for their training needs is that they want to streamline effective practices for employees who are often in different locations. VR shares the limitations of face-to-face training rather than the reach of online learning, since very few people can be trained at once.

Additionally, replicating the training across multiple locations would only be possible if all of those locations have the same VR capabilities with trained instructors (which brings us back to high cost).

3. Distractibility

New technology can sometimes be distracting. A trainee must first learn how to use the VR headset before completing the actual training, which takes time.

Not to mention, our instinct when it comes to new tech is to play around with it. If you have a strict timeline that doesn’t allow for much exploration of the new tool, VR might not be right for you.

The Verdict

Is VR just a distracting new trend? How much value does it add? What is the ROI? Do the ends justify the means? In the end, it depends. Every training scenario is different and needs to cater to a different audience. We think some audiences would not only benefit but thrive with the addition of VR. We are, for example, developing VR for the power industry, where actions like pole climbing and derricking are better experienced through immersive technology than through a dangerous outdoor lesson, which also carries additional insurance requirements for the risk.

On the other hand, not every training program needs VR. The decision to include VR in your company’s training program has to come with the support of upper management. There needs to be a vision of how VR will integrate into your overall training needs and scope.

If your vision for VR-based training does not come with approved budgets of time, effort and money to make it happen, it’s probably smarter to stick with a less complex training method.

Source: https://www.ideaoninc.com/what-is-blocking-vr-adoption-reality-strikes/

The post What is blocking VR adoption? – Reality Strikes appeared first on eLearning.

Adding Narration to Ready-to-Go (QSP) slides


In some previous blogs I have warned you to be careful with the Switch to Destination Theme feature and demonstrated an alternative, safer workflow to embed ready-to-go slides in a course with a custom theme. Because the themes in the responsive and non-responsive Quick Start Projects are not identical, I added a third post explaining the possible issues in the responsive version. That last post offers a free table with the names of all the master slides (including the most recent Alliance project). It is an important resource if you want to avoid issues when embedding a ready-to-go slide. Issues occur when your current custom theme and the theme of the embedded slide have identically named master slides.

Only two QSP projects have audio in the Library: Rhapsody and Wired. The clips are used on the Podcast slide, triggered with the Play Audio command. As you all know, adding good narration (VO) to an eLearning course will enhance its efficiency considerably. Designers of the QSPs have taken care of easy replacement of graphical assets for you, but if you want to add narration or other audio to the course, you get no help. This post will try to help you with some tips. It will not cover the creation of the audio clips, whether you use the TTS feature or a real audio recording; only tips to avoid frustration when adding audio to the slides. Since the Closed Captioning feature of Captivate is limited to slide audio, the post will not explain how to add other types of audio. At the end you’ll find some more links which could help you in that case.

Adding an audio clip

My usual workflow for inserting an audio clip as slide audio is:

  • Import the audio clips to the project Library
  • Drag the audio clip to the timeline of the slide (or to the stage; be careful not to drag it onto an object on the stage). Originally the slide timeline on most QSP slides is 3 seconds. If the audio clip is longer, a dialog box will appear. Choose the first option to extend the slide timeline.
  • The audio timeline appears below the slide timeline. I will increase the slide duration manually a bit more, so that I’m able to move the audio clip to leave a small gap before and after the audio.
  • You can see in the previous screenshot that all the object timelines have been extended automatically. The reason is that all slides in the QSP project have objects timed for the ‘Rest of the Slide’. You can check the Timing Properties.
  • Test the audio, not by using Play Slide but with a real Preview. For a non-responsive project this means you need to use F11, Preview HTML5 in Browser; for responsive projects any Preview is fine.

Tip 1: Fix when objects disappear

In most QSP slides, all objects are timed for the Rest of the Slide. Result: when the slide duration is increased to fit the audio clip, all object timelines will follow. However, there are some exceptions, as seen in this screenshot:

You see in this slide (12 from Aspire) that some objects have their timeline stuck at 3 seconds. When playing that slide in Preview, those objects will appear after 3 seconds.

Fix: select the objects. This can be done by clicking their timelines in combination with CTRL (or SHIFT for sequential timelines). Then use the shortcut key ‘CTRL-E’ to change their timing to ‘Rest of Slide’. You can also use the Timing properties, of course.

Tip 2: Pause commands on slide event

Many slides in the Aspire project have the setup visible in this screenshot:

This will prevent the audio clip from playing, which is very weird.

You have to take out that pause; you can safely replace it with ‘No action’, which is the default setting for both On Enter and On Exit. The result will be that the slide is not paused, and the playhead will move seamlessly to the next slide when the audio has finished. That is IMO the most logical situation. If you want to mimic a presentation, where each slide is paused, you can change the On Exit action to ‘Pause’ or add a ‘Next’ button to pause the slide.

Many slides in the other QSP projects have the On Exit event set to Pause, to mimic a PowerPoint presentation. It is a personal preference of course, but I largely prefer to have the playhead proceed to the next slide without learner intervention. If you share that idea, just take out the Pause for the On Exit event.

Tip 3: Pausing points

Contrary to the Pause command (see Tip 2), a pausing point will not pause slide audio, although the playhead will be paused and so will all graphic assets/effects. By default, all pausing points are set to occur at 1.5 secs, either on the slide (for Quiz slides and Drag&Drop) or after the appearance of the interactive object.

In most cases the increase of the slide duration due to inserting an audio clip will not cause problems, when the pausing point is due to an interactive object like a button. Reason: the button will often trigger an action jumping to another slide, or revealing some content.

Possible issues can occur on quiz slides, if you insert an audio clip.  Here is an example (T/F slide 23 in Alliance):

You see the pausing point on the slide timeline at 1.5 secs. Some smart shapes are not timed for the Rest of the Slide (see Tip 1). However, since the slide is paused at 1.5 secs, before the end of those shape timelines, you’ll not be bothered by that, while the audio continues playing (it is not paused). If, while testing that slide, you have to wait a long time after the Submit process and you see the shapes disappearing, that is due to the Actions setup in Quiz Properties. They were set in this slide as shown here:

Both actions are set to ‘Continue’, which means that the playhead is released. It will have to move through all frames after the pausing point before getting to the next quiz slide. Not exactly what you want. Two possible fixes:

Fix 1: as is the case for other quiz slides in the QSP projects, replace the action ‘Continue’ by ‘Go to Next Slide’. You don’t have to bother about the wrongly timed smart shapes in that case.

Fix 2: move the pausing point on the quiz slide with the mouse so that it is near the end of the slide. In that case you have to extend the smart shape timelines as explained in Tip 1. This may be a safer workflow if you expect slow connections with the LMS.

Similar problems can occur on Drag&Drop slides, but I will not expand on that here. Know, however, that the pausing point is not visible on those slides in the Timeline panel.

Some links

If you are a newbie, you may want some more information about the tips explained above. Not surprisingly, they all link to understanding the Timeline (priority 1 for each Captivate user). Maybe this blog (one of a sequence about the Timeline) can help you:

Pausing Captivate’s Timeline

For quiz slides, the Submit Process is explained in detail in this post:

Submit Process

If you want to add audio clips which are not slide audio:

Pausing Timeline and Audio

The post Adding Narration to Ready-to-Go (QSP) slides appeared first on eLearning.

Memory Game – Part 1: ChemMem

In this post I share my good old fashioned memory game where you check two tiles at a time to try and find a match.

For each match you will receive one point, but for each pair you turn over that does not match, you will lose a point. See how good your memory really is.

With each new game – the tiles will be the same but they will be scrambled up a little for you. After ten rounds are you able to have a positive score? The odds might be against you – but if you have a great memory, you can come out on top!

I plan to work on cleaning up the code since it is a bit rough around the edges. I hope to try and make this a bit easier to customize before sharing the project file with you all.

In the meantime – can you go a full 10 rounds by matching Elements from the Periodic Table with their atomic numbers? It might be fun to post screenshots of your best scores to see who has the best memory!


The post Memory Game – Part 1: ChemMem appeared first on eLearning.

Volume Level Control ‘add-on’ for Captivate’s native Skin Playbars – Part 2

This is the piece of JavaScript code that we need to run on slide enter – first slide on the Current Window as a simple ‘Execute JavaScript’ action or as part of an advanced action.

The most important concept in the JavaScript code used here is what is called ‘event listeners’. An event listener is like a sentinel that monitors a particular element and listens for specific events to occur; that is, for a particular way this element is interacted with (e.g. if it’s clicked, double-clicked, right-clicked, etc.). When such an event occurs, the event listener ‘does something’; that is, it executes a specific predefined chunk of code. When setting up such an event listener, one can define:

  • Which element to monitor, including by which means the target element is to be identified
  • Which event on that element to listen for
  • What should happen when the specified event on the specified element is detected

In order to establish which element should be monitored we’ll only use the getElementById method here, which uses the element’s HTML id to identify it. The general syntax for adding event listeners this way is:

document.getElementById("[id of target element]").addEventListener("[event type]", [do something]);

To learn more about JavaScript event listeners and the events they can detect, e.g. see here.
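As a minimal, generic illustration before we turn to the actual playbar code (the element id "myButton" and the function names here are made up for this example, not part of the playbar), attaching and detaching a listener with a named handler function might look like this:

```javascript
// Hypothetical example: "myButton" is a made-up element id, not one of the
// playbar elements. The handler is a NAMED function, so the very same
// function reference can later be passed to removeEventListener.
function onMyButtonClick() {
  console.log("myButton was clicked!");
}

function startListening() {
  // Attach the click listener to the element.
  document.getElementById("myButton").addEventListener("click", onMyButtonClick);
}

function stopListening() {
  // Detach it again; this only works because the same named function
  // reference is passed that was used with addEventListener.
  document.getElementById("myButton").removeEventListener("click", onMyButtonClick);
}
```

removeEventListener only matches listeners registered with the identical function reference, which is one reason the playbar script consistently uses named functions such as showVolumeControl and hideVolumeControl.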

Let’s go through the code step by step. First, we need to add some event listeners that should be active right away as soon as the first slide loads:

document.getElementById("AudioOn").addEventListener("mouseover", showVolumeControl);

This monitors the playbar’s Mute button (HTML id: AudioOn), and calls the function showVolumeControl whenever it detects that the user moves the mouse over the button (mouseover). We’ll have a closer look at what showVolumeControl actually does in a moment, but as the name implies, its main task is to show the VolumeControl object on the stage (hidden by default), so it can be interacted with.

document.getElementById("VolumeControl").addEventListener("mouseout", hideVolumeControl);

This, on the other hand, monitors the VolumeControl object on the stage once it’s shown, and invokes the function hideVolumeControl when it detects that the user moves the mouse away from it (presumably after they are done setting the volume level). Obviously, this function’s main job is to revert the VolumeControl object on the stage back to its default hidden state. Again, we’ll see that in detail in a minute.

The third event listener in the group at the top of the code (well, actually the middle one in the order I put them in my code; the order doesn’t matter at all though):

document.getElementById("AudioOn").addEventListener("click", toggleVolumeControl);

This monitors the Mute button again, but this time it listens for mouse clicks rather than mere mouseover events. Every time the button is clicked, it calls the function toggleVolumeControl. We’ll see later what that function does. Note that the Mute button’s original on-click functionality of muting/unmuting the sound and toggling the state of the Mute button face is not touched by this at all; the event listener for that sits somewhere buried in the playbar’s code and remains fully functional.
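Taken together, the three initial listeners could be grouped as in the following sketch. The wrapper function wireUpInitialListeners and the stub bodies are my additions for readability; in the actual script the three addEventListener statements simply sit at the top of the code, and the real function bodies are discussed further down in this post:

```javascript
// Stubs standing in for the functions discussed further below in the post.
function showVolumeControl()   { cp.show("VolumeControl"); }
function hideVolumeControl()   { cp.hide("VolumeControl"); }
function toggleVolumeControl() { /* discussed later in the post */ }

// Convenience wrapper (my naming, not in the original script) around the
// three listeners that must be active as soon as the first slide loads.
function wireUpInitialListeners() {
  // Show the volume control when the mouse enters the Mute button.
  document.getElementById("AudioOn").addEventListener("mouseover", showVolumeControl);
  // Suppress or restore the control when the Mute button is clicked.
  document.getElementById("AudioOn").addEventListener("click", toggleVolumeControl);
  // Hide the control again when the mouse leaves its bounding box.
  document.getElementById("VolumeControl").addEventListener("mouseout", hideVolumeControl);
}
```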

Below those addEventListener statements in our code follow the function definitions; that is, the different sets of instructions that should be carried out when the event listeners detect something.

The first function is showVolumeControl, which is what is called when the user moves the mouse over the Mute button. Let’s have a look at the instructions in there.


This is the actual command to show the VolumeControl object on the stage.
cp.show("[Captivate Object Name]") and cp.hide("[Captivate Object Name]"), which we’ll see in a minute, are methods that come with Captivate’s JavaScript API. No need to worry about that though, since the required Captivate JavaScript API Interface Object is loaded automatically on every course load, making those methods available as if they were generic JavaScript methods.

document.getElementById("CC").addEventListener("mouseover", hideVolumeControl);
document.getElementById("playbarSlider").addEventListener("mouseover", hideVolumeControl);

Those are event listeners again, similar to those we added at the top of our code, but this time added dynamically when the function is called. They add mouseover sensitivity to the Closed Captions button (HTML id: CC) and the Playbar Slider (HTML id: playbarSlider) and call the function hideVolumeControl. This is to make sure that the Volume Control is hidden again if the learner just incidentally touches the Mute button with the mouse while moving the mouse horizontally over the playbar, without any real intention of changing the volume level. The Closed Captions button and the Playbar Slider are the neighboring elements to the Mute button on the playbar, so the VolumeControl object will be hidden as soon as the user horizontally moves the mouse out of, and a little bit away from, the Mute button over those neighboring elements. See Part 1 of this blog for instructions on what to do if you don’t display the Playbar Slider and/or Closed Captions button on your playbar.

document.getElementById("main_container").addEventListener("mouseleave", hideVolumeControl);

This adds a mouseleave event listener to the main_container element, which is basically the area occupied by the whole course on screen, consisting of the stage area, project border and playbar. It also calls the function hideVolumeControl. Like mouseout, mouseleave fires when the user moves the mouse out of a certain element. We won’t discuss the difference between mouseleave and mouseout here; e.g. see here. In our solution, this ensures that the Volume Control, if currently displayed, is hidden as soon as the user moves the mouse completely outside the course window (since they obviously have no intention of further modifying the volume level if they do that).

The second function, hideVolumeControl, which is called when the user moves the mouse out of the VolumeControl bounding box, is basically the opposite of showVolumeControl, in code syntax as well as in effect. It reverts everything that showVolumeControl did before.


This is the actual instruction to hide the VolumeControl object on the stage.

document.getElementById("CC").removeEventListener("mouseover", hideVolumeControl);
document.getElementById("playbarSlider").removeEventListener("mouseover", hideVolumeControl);
document.getElementById("main_container").removeEventListener("mouseleave", hideVolumeControl);

These remove the event listeners that were added by the showVolumeControl function before, by using the removeEventListener method. The syntax for the removeEventListener method is exactly the same as for the addEventListener method.
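Assembling the statements we have walked through so far, the show/hide pair could be written out as in the following sketch (a reconstruction from the snippets discussed above, not necessarily a verbatim copy of the full script linked in the post):

```javascript
function showVolumeControl() {
  // Show the (by default hidden) VolumeControl object via Captivate's JS API.
  cp.show("VolumeControl");
  // Hide it again when the mouse brushes over the neighboring playbar
  // elements, or leaves the course window entirely.
  document.getElementById("CC").addEventListener("mouseover", hideVolumeControl);
  document.getElementById("playbarSlider").addEventListener("mouseover", hideVolumeControl);
  document.getElementById("main_container").addEventListener("mouseleave", hideVolumeControl);
}

function hideVolumeControl() {
  // Revert everything showVolumeControl did: hide the object again...
  cp.hide("VolumeControl");
  // ...and detach the listeners that are only needed while it is visible.
  document.getElementById("CC").removeEventListener("mouseover", hideVolumeControl);
  document.getElementById("playbarSlider").removeEventListener("mouseover", hideVolumeControl);
  document.getElementById("main_container").removeEventListener("mouseleave", hideVolumeControl);
}
```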

Finally, there’s the toggleVolumeControl function, which is called when the user clicks the Mute button.

This function is a little more complex, in that it checks whether a certain condition is met and then executes different branches, depending on whether or not that’s the case. The condition to check is whether the Mute button on the playbar is currently in the unmuted state (whether the statement…

cp.movie.am.muted == false

…is correct).

cp.movie is an object in the CPM.js file, the JavaScript file that basically creates and manages the whole course web page on the fly at run time. It holds a lot of objects that reflect and control the current state of the course; among them the playbar Mute button’s state under cp.movie.am.muted.

At run time, the Mute button’s native actions of muting the audio and changing its state to reflect that will run first, and only after that is our code run. So if the statement cp.movie.am.muted == false happens to be incorrect, it means that audio is currently muted and the Mute button is currently in the muted state. If this is the case, we want the Volume Control to be suppressed completely; that is, hidden right away and not popping up again as long as sound is muted.

The second branch of our function (everything after else; see the toggleVolumeControl function in the code here) takes care of that.


This is the very hideVolumeControl function we discussed above, called here from within the toggleVolumeControl function. As we’ve already seen, it hides the Volume Control and removes all event listeners that are not required as long as the VolumeControl object is hidden.

document.getElementById("AudioOn").removeEventListener("mouseover", showVolumeControl);

This removes the mouseover event listener on the Mute button, assuring that the VolumeControl object is not shown on mouseover on the mute button again, as long as the sound is muted.

When the user clicks the Mute button again to unmute the sound and change the button’s state back to unmuted, the condition cp.movie.am.muted == false evaluates to true, so the first branch of our toggleVolumeControl function (everything between the condition and else) is executed. And this of course does just the opposite:


As we already know, this shows the volume control (as in that moment the user will still be hovering over the Mute button, having just clicked it). It also restores the event listeners to hide the volume control again when no longer used.

document.getElementById("AudioOn").addEventListener("mouseover", showVolumeControl);

This adds back in the mouseover event listener on the Mute button that would show the VolumeControl object on subsequent mouseover on the Mute button.
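Putting the branches together, toggleVolumeControl could be sketched as follows (again a reconstruction from the pieces discussed above; the minimal showVolumeControl/hideVolumeControl bodies are stand-ins for the full versions covered earlier):

```javascript
// Minimal stand-ins for the two functions discussed earlier, so this
// sketch is self-contained.
function showVolumeControl() { cp.show("VolumeControl"); }
function hideVolumeControl() { cp.hide("VolumeControl"); }

function toggleVolumeControl() {
  // The Mute button's native click handler has already run at this point,
  // so cp.movie.am.muted reflects the NEW state after the click.
  if (cp.movie.am.muted == false) {
    // Sound was just unmuted: show the control (the mouse is still over
    // the Mute button) and restore the mouseover listener.
    showVolumeControl();
    document.getElementById("AudioOn").addEventListener("mouseover", showVolumeControl);
  } else {
    // Sound was just muted: hide the control and suppress it completely
    // until the sound is unmuted again.
    hideVolumeControl();
    document.getElementById("AudioOn").removeEventListener("mouseover", showVolumeControl);
  }
}
```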

If you check out the function declarations in the full code here, you might notice that in addition to what we’ve discussed, there are a couple of statements that read:

//console.log("[something] fired!");

A double slash (//) flags an inline comment in JavaScript; that is, whatever comes to the right of it is ignored by the browser at run time. Including those lines as executable code in their current position, by deleting the two preceding slashes, would report to the browser’s console when the respective function is called, and in the case of toggleVolumeControl, which of the two branches is executed. The console can be displayed by bringing up the browser’s developer tools (F12 on most browsers). This is sometimes helpful during development and for troubleshooting if something does not work as expected. You can safely delete those lines completely from the code if you want.

The post Volume Level Control ‘add-on’ for Captivate’s native Skin Playbars – Part 2 appeared first on eLearning.



FLUID BOX BACKGROUND

In this blog, I will talk about one aspect of managing fluid box fields in the context of using Captivate assets. I once wrote about filling buttons with pictures. This topic is in some aspects similar, but now we will look at fluid boxes.


Before I start, let me say a bit about the templates provided by the Adobe Captivate team. They can be used as they are, but in reality the cases in which you can use them without introducing any changes are rare. Rest assured, though: the templates are indeed manageable. Those Captivate users who have never used templates are encouraged to give them a try.

If you have never tried the pre-defined templates, your starting point is to click the icon as shown below.

You can toggle between projects and slides. As you will see, there are a few projects and many slides to choose from. They are worth trying, so you are encouraged to play with them.


You will notice that the projects from the “Assets” have some particular pictures in the background. If you click on the fluid box field, you can’t do anything with them. Try it yourself. Even if you remove all the elements inserted into a particular fluid box, the background seems untouchable. So how do you manage them, and why aren’t they available after a simple click into the fluid box field?
Well, let’s look at what happens with the elements inserted into a fluid box.
If you drag graphic items into a fluid box (from the library or from a chosen folder), you have direct access to the picture after a click. You can unlock it from the fluid box, click it, delete it and so on.

But if you want to use one picture as a background and another on top of it, it doesn’t work. Graphic elements inserted into fluid boxes are positioned vertically or horizontally, but not on top of each other. So now let’s look at how to dig into the background and manage it.


So how do you apply your own background pictures? Click on the fluid box you want to fill with a picture, then click on the “image” drop-down menu and choose the bottom option, “image fill”.

Now, click on the “fill” box and you will see a panel where you can go to the desired folder and choose the picture you want to use as a background.

Next, you can position the picture with a couple of options from another drop-down menu called “tile type”.

And that’s basically how it works. If you have ever wondered how to place one picture above another in fluid boxes, here is a solution.


Inserting a graphic element directly into a fluid box does not let you create backgrounds. If you want to create one, use the method of filling the fluid box with a picture via the “fill” area in the Properties panel. Then you can place other graphic elements or text fields on top.

The post FLUID BOX BACKGROUND appeared first on eLearning.

How many characters and messages will the Adobe Connect Virtual Classroom Chat pod hold?

Adobe Connect Meeting Chat Pod supports:

  • Sending a chat message: 1024 characters
  • First time history fetch: 250 messages
    • Upon joining any meeting,  the most recent 250 messages are fetched from the server.
    • If there were more than 250 messages, scrolling up will not display the older messages. To read those older messages, simply choose the option to email the chat content
  • Character limit retained in the chat: 50,000 RTF characters (after which it is trimmed to half or 25,000 characters)
    • The character trimming executes every time the character count reaches 50,000
    • The warning message reappears when the character count reaches 80%  of the limit
    • Emailing the chat while the warning message is still on will email the entire chat thread

For more details about character limits in Adobe Connect Meeting pods, see the following tech-note: Character Limits in Adobe Connect Meeting Pods

The post How many characters and messages will the Adobe Connect Virtual Classroom Chat pod hold? appeared first on eLearning.

Volume Level Control ‘add-on’ for Captivate’s native Skin Playbars – Part 1

I don’t use the built-in Captivate playbars very often, since I don’t particularly like their look and feel, and their lack of options to fine-tune style and functionality. However, for simpler projects I do use them at times.
One of the many limitations, and a thing that has always bugged me, is the fact that they don’t provide any controls to set the audio volume level, other than mute or unmute; something that I’d guess a user simply expects as part of any kind of player controls these days. An accomplished web designer and JavaScript coder might be able to add this feature to the playbars, but given their mentioned overall limitations, this might not really be worth the effort. Anyway, my humble skills are nowhere near sufficient to pull this off. So is there some quick and dirty workaround?

Captivate comes with the ‘Volume Control’ Learning Interaction. Like most of these Learning Interactions (aka HTML5 Widgets), this again has severe limitations with respect to fine-tuning style and functionality, but it’s at least something. However, it resides on the stage and as such cannot be part of the playbar. The user experience I had in mind was that of the learner rolling the mouse over the Mute button on the playbar and a volume slider popping up; mouse out, and it is hidden, like you would have it on any decent video player.

What if one could make the Playbar (off-stage) and the Learning Interaction (on-stage) kind of talk to one another to make this happen? I toyed around with JavaScript event listeners and tried to throw together a script that would do that. Now, I would rate myself still a beginner with JavaScript, so any seasoned coder will without any doubt find my whole approach just hilarious, but hey, it seems to work. See it in action here:


The Volume Control is shown when the mouse is moved over the playbar’s Mute button. Once shown, it can be interacted with on the stage. It is hidden again as soon as the mouse is moved out of its bounding box on stage, to one of the playbar elements next to the Mute button, or when the mouse is moved out of the Project area altogether (onto the HTML Background area).

If the mute button is clicked to mute sound, the Volume Control is suppressed and would not show on Mute Button mouse over. When the mute button is clicked again to unmute audio, it is restored, and so is the functionality of showing the Volume Control on Mute button mouse over.

To implement this in a Captivate project that makes use of one of the Skin playbars, make sure that ‘Mute’ is checked in the Skin editor, so the Mute button is shown on the playbar.

Add the ‘Volume Control’ Learning Interaction to the first slide. I’d advise disabling the ‘Show Label’ option in the interaction’s settings, and matching ‘Track’ and ‘Base Color’ to the overall Theme/Skin used.

Name it “VolumeControl” and place it at the very bottom of the stage, right above where the Mute button will appear at run time, actually with its bounding box protruding downwards out of the stage area (see image below).

You’ll have to play a bit with the position, repeatedly checking in preview, to get it right. At run time the visible part of the volume control interaction should just be touching the playbar below, so it kind of looks like an extension of the latter. I rotated mine by -90° in my example here to make it a vertical volume control, but that’s a matter of personal taste, of course. In fact, it might actually be easier to operate when it’s horizontal.

Set the ‘VolumeControl’ object to display for the rest of the project with ‘Place Object on Top’ checked, but hide it from the output.

On slide enter – first slide, run this JavaScript on the Current Window as a simple ‘Execute JavaScript’ action or as part of an advanced action.

This should work with all playbars that come with Captivate, and also with custom-made ones, provided they follow the same naming conventions for their elements.

This example here assumes that the Playbar Slider, as well as the Closed Caption button are displayed on the playbar, with the Mute button in between. In case you don’t display those specific elements on your playbar, you’d have to replace all references to “playbarSlider” in the code with the id of the next button to the left of the Mute button, and all references to “CC” with the id of the next button to the right of the Mute button (see list below).

HTML IDs of Skin Playbar elements:

  • Rewind
  • Play
  • Backward
  • Forward
  • FastForward
  • playbarSlider
  • playbarSliderThumb
  • AudioOn
  • CC
  • Print (available on the ‘Print’ playbar)
  • Exit
  • Info (available on the ‘default’ playbar)
  • playbarLogo (available on the ‘default’ playbar)
  • playbarBkGrnd (non-interactive background area of the playbar)

Known limitations/issues:

  • The options to size and style the ‘Volume Control’ interaction are very limited. The overall shape cannot be altered, and even attempts to simply resize it might lead to unexpected results. Theme colors are not honored and must be selected manually in a rather painful process. This alone wouldn’t be so bad if the interaction came by default in a less generic design with better operability.
  • I didn’t try any of this on a responsive project. No point I guess, since it’s all based on ‘mouseover’ events, which don’t exist on a touchscreen device anyway.
  • The ‘Volume Control’ interaction does not reflect any externally invoked volume level changes, say through advanced actions or JS, and always defaults to a level of 100%. So if you, for example, wanted to set the volume to a default level of 75% for your course, you could do that through actions on entering the first slide, but once the Volume Control is invoked for the first time, it will nonetheless show a level of 100%.
  • Although ‘Place Object on Top’ is checked, some other objects on the stage might end up rendered on top of the ‘Volume Control’ interaction at run time, if those are also, by definition or by configuration, placed ‘On Top’: for example, other objects timed for the rest of the project and placed ‘On Top’, ‘Master Slide Objects On Top’, Overlay Slides, etc. I also noticed that additional objects added to an object state sometimes mess up the stacking order; I’m not sure whether that would be a problem here, though. At any rate, in such cases the volume control would pop up partly or fully behind those objects and might not be operational.
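Regarding the default-volume point above: a slide-enter action can still set the volume through Captivate’s cpCmndVolume system variable, even though the interaction won’t display it. A hypothetical sketch (the helper function and the assumed 0–100 scale are mine; verify against your Captivate version’s documentation):

```javascript
// Hypothetical helper: set a default volume level through Captivate's
// cpCmndVolume system variable. Assumes a 0-100 scale (verify this!);
// the clamp just keeps the value in that range.
function setDefaultVolume(api, percent) {
  var level = Math.max(0, Math.min(100, percent)); // clamp to 0-100
  api.setVariableValue('cpCmndVolume', level);     // Captivate runtime API
  return level;
}
```

On entering the first slide this could be invoked as `setDefaultVolume(window.cpAPIInterface, 75)`; as noted above, the Volume Control interaction will still show 100% the first time it pops up.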

So one would be far better off with a custom volume control, e.g. built from SmartShapes or something. If only there were Slider objects in Captivate! Maybe in a Part 3 of this blog sometime…

And of course:
There might well be a much more elegant and simple way to achieve all this with js. I’d be happy to learn.

Stay tuned for Part 2 of this blog, which will discuss the JavaScript code used for this solution.

The post Volume Level Control ‘add-on’ for Captivate’s native Skin Playbars – Part 1 appeared first on eLearning.

How Retailers Are Solving Customer Experience Challenges through Digital Transformation

With Industrial Revolution 4.0, we are experiencing a wave of Digital Transformation. This wave of technology seeks to deliver greater value across processes, with positive outcomes for business efficiency, customer experience, and business innovation.

Various technologies, such as SMAC (social, mobile, analytics, and cloud), give organizations a functional view of the marketplace, helping them glean crucial business perspectives. The disruptive nature of these technologies helps businesses record, process, and analyze data to reach important insights, which are essential to understanding customer sentiment and trends.

All this, as you might imagine, goes in the service of creating more effective customer experiences and related peripheral processes.

When it comes to the retail market, retailers are looking not just for ways to enhance marginal parts of the customer experience; they are trying to reinvent the wheel, in a manner of speaking. The approaches they are seeking today cover not just streamlining processes or improving efficiencies while reducing costs for certain transactional cycles.

These processes must also include within their ambit customer centricity, agility, data intelligence, innovation, as well as foresighted value propositions.

With specific regard to customer experiences and technology, it is important that retailers understand how to make the best use of the available as well as emerging technologies. Here are some of the ways to make it happen:


Personalization

Through the use of data, retailers can provide a more personalized experience to their users. This covers a wide range of services, whether it is ads that cater specifically to their tastes, or recommendations of new goods and services they are likely to be interested in, based on their earlier purchases. This impacts the overall customer experience in multiple ways, as it leads users to engage more with the offerings available to them. And as is known in all customer-driven industries, user engagement is key to long-lasting success.

Ease of Use

By leveraging the power of digital transformation, a lot of retailers have been able to make the process of buying things simpler for their customers. Whether it is due to a more intuitive UI, or functional changes that make it simpler to navigate a maze of similarly grouped products, at the end of the day, the customer experience has been drastically reshaped by today’s changing technology.

Post-Purchase Servicing 

It is not only the presale journey that has been significantly altered for a customer; it is also the journey that follows if anything goes wrong with a purchased product. The process of requesting customer service, as well as getting a response from the concerned team, has been significantly improved by technological advances like token systems, AI chatbot solutions, and more. As a result of these transformations, it is also easier for retailers to build loyal customers, as customers feel more valued when their queries are answered on time and dealt with in a competent manner.

We at Affle Enterprise grasp your business challenges and ideate personalized user experiences to solve complex business problems through our mobile app development and chatbot development services. Connect with our team for a quick digital transformation consultation.



The post How Retailers Are Solving Customer Experience Challenges through Digital Transformation appeared first on eLearning.

Xamarin: The Next Big Thing in Mobile App Development

Since Microsoft announced a global collaboration with Xamarin at the beginning of this year, the mobile app development world has taken notice of this relatively little-known player in cross-platform mobile app development.

To the layman, Xamarin is just another platform for developing apps in C#. If you ask a techie, he’ll beam at you before going on and on about how Xamarin has changed mobile app development and why it is the next big thing. If you are remotely interested in mobile apps, read on; we are about to tell you why Xamarin!

The buzzword with Xamarin is shared code.

What exactly is shared code?

Let’s say you are building the same application across Android, iOS, and Windows, and you want to write the code just once. Is it possible? Of course it is. That’s what most mobile development platforms these days are trying to achieve. Increasing the reusability of code by adding native wrapper classes or native code for integration with native libraries is characteristic of any mobile app development platform. Xamarin claims that 75% of its code can be shared across operating systems. If user interfaces are built with Xamarin.Forms, the shared-code percentage can go well over 90%. Needless to say, this can be a huge time-saver!


As a developer, this means your effort is reduced by a significant amount. And if you are a client paying an app developer, the cost of development goes down for you as well.

What Products Fall under Xamarin?

The Xamarin platform, Xamarin.Forms, Xamarin Studio, Xamarin.Mac, Xamarin for Visual Studio, Xamarin Test Cloud, RoboVM, and the .NET Mobility Scanner are all products that fall under Xamarin (now acquired by Microsoft).

.NET Mobility Scanner

Xamarin’s .NET Mobility Scanner lets developers see how much of their .NET code can run on other operating systems, specifically Android, iOS, Windows Phone, and Windows Store. It is a free web-based service that uses Silverlight.

The Xamarin Advantage for Enterprise Apps:

With the rise in enterprise app development, it is no surprise that Xamarin has become a favorite:

  • Xamarin is supported by Mobile Backend as a Service (MBaaS) providers. MBaaS systems provide mobile-optimized cloud backends and enterprise backend connectors, making development work easier for enterprise mobile applications.
  • KidoZen, a major player in the MBaaS market, provides private and public cloud-based backends for mobile applications, as well as enterprise backend connectors. KidoZen’s SDK is available on the Xamarin component store, allowing Xamarin-based mobile applications to connect with various backend systems using a very small amount of code.
  • SAP has collaborated with Xamarin, making enterprise mobility accessible for enterprises running SAP software.
  • The Salesforce SDK is available for free on the Xamarin component store.
  • IBM has made available its MobileFirst SDK through the Xamarin component store.
  • Microsoft Azure mobile service connectors are available for Xamarin, making it easier for enterprise mobile applications to store non-sensitive application data in the Azure cloud.

App Types That You Can Make with Xamarin

Finance, healthcare, enterprise, utility: anything you can name can be built using Xamarin. Xamarin is in fact the best of both worlds; the reusability of HTML5 and the native code ability of Java, JavaScript, and Objective-C are all covered in this one technology.

For more on the technology, stay tuned. Our own developers are pretty keen to talk more about Xamarin and how it helps them build the awesome apps that they do! We are coming up with a sequel to this write-up soon, where we discuss the advantages of Xamarin over other platforms and the kinds of apps that have been built using the platform.

Tell us more about your experiences with Xamarin as a developer or a client in the comments thread below.

The post Xamarin: The Next Big Thing in Mobile App Development appeared first on eLearning.

How a Bot can Replace your HR Department by 2020?

We are all increasingly aware of the reality that robots play a major role in our lives now. Whether in physical form, as a helper during surgeries, or in software, as in a chatbot, we are surrounded by a bevy of machines that influence our lives in very significant ways. That’s why it should come as no surprise that even the HR departments of various organizations are considering using bots to replace their actual human resources!

The irony is perhaps stark in this context, but it soon starts to make sense. Gartner, for instance, has predicted that 80% of all end-to-end customer interactions will be handled by chatbots, with no humans involved at all. The truth is that chatbots have already started to change the HR industry; their impact is hypothetical only in a strictly projective sense, as they have already made their mark. Looking to the future, here are some of the ways in which they are likely to create further impact:

Mobile Apps will be Gone

It is predicted that the market for mobile apps will become quite saturated, given their current popularity. This leads to the prediction that 50% more companies will put their money into chatbots than into mobile apps, as this will result in time savings of around 2.5 billion hours by 2023, combining the time saved by both customers and businesses. Not only this, but it will also lead to monetary savings of around $8 billion. The global market insight here is that the overall market value of chatbots is expected to reach $1.3 billion by 2025.

Recruitment bots will become more important

Some companies will seek to replace their HR function with bots completely. This is a practical decision, as it will enable them to save time on manual tasks such as scheduling interviews, screening candidates, and answering miscellaneous queries. A chatbot can easily manage all of these tasks, with a much faster turnaround time than a human. This process also allows many forms of discrimination to be addressed, since a bot has no favoritism. The only thing that remains to be seen is whether a recruitment bot will be sensitive enough to process all the nuances of the individuals it screens.

Employee support will be given by therapy bots

Usually, there are no places for an employee to seek refuge if something goes wrong in the workplace, but therapy bots seek to bridge this gap. The idea behind them is to make mental health support accessible and to break down barriers to entry when it comes to communication. Dedicated chatbots to address emotional situations are already being created, and there is every reason to believe that these bots will be received positively in the future, considering the widespread de-stigmatization of mental health now underway. These bots will either function independently or work as stepping stones to other mental health services.

We at Affle Enterprise grasp your business challenges and ideate personalized user experiences to solve complex business problems. Connect with our team at enterprise@affle.com for a quick digital transformation consultation.



The post How a Bot can Replace your HR Department by 2020? appeared first on eLearning.