UNIT 4
IT INPUTS IN SOCIAL SCIENCE TEACHING
4.1. COMPUTER AIDED LEARNING
Computer-based education (CBE) and computer-based instruction (CBI) are the broadest terms and can refer to virtually any kind of computer use in educational settings. Computer-assisted instruction (CAI), also called computer-aided instruction, is a narrower term and most often refers to drill-and-practice, tutorial, or simulation activities. Computer-managed instruction (CMI) is an instructional strategy in which the computer is used to provide learning objectives, learning resources, record keeping, progress tracking, and assessment of learner performance. Computer-based tools and applications are used to assist the teacher or school administrator in managing the learner and the instructional process.
Computer assisted instruction (CAI)
A self-learning
technique, usually offline/online, involving interaction of the student with
programmed instructional materials. Computer-assisted instruction (CAI) is an
interactive instructional technique whereby a computer is used to present the
instructional material and monitor the learning that takes place. CAI uses a
combination of text, graphics, sound and video in enhancing the learning
process. The computer has many purposes in the classroom, and it can be
utilized to help a student in all areas of the curriculum. CAI refers to the
use of the computer as a tool to facilitate and improve instruction. CAI
programs use tutorials, drill and practice, simulation, and problem solving
approaches to present topics, and they test the student's understanding. It is widely accepted that the
integration of modern Information and Communication Technologies (ICT) into the
teaching learning process has great potential. In fact, it could be the most
important way by which states can meet their educational aspirations within
reasonable time and resources. The use of computers in elementary schools is basically envisioned as a teaching and learning aid, besides developing computer literacy among children. Computer aided learning will help make the present teaching-learning process joyful, interesting and easy to understand through audio-visual aids. Teachers will be resourced with multimedia content to explain topics better. Overall, it will help improve the quality of education in the long run.
Uses of CAI
Development
of multimedia based educational content
The multimedia-based educational content will be developed in local languages/mediums besides English on the identified hard spots in Science, Mathematics and English subjects. These will be used to improve and enhance the teaching-learning process in classrooms. Concepts that are hard to visualize, simulations and dynamic processes will be explained through good and effective graphics, sound, animations and video clips based on imaginative analogies and components that are locally available and commonly noticed by children in real life.
The developed multimedia-based educational content aims to help learners acquire knowledge and reinforce learning, and goes beyond this to include conceptual clarity of the knowledge acquired. Adequate levels of visualization will be achieved through extensive use of graphics, simulation of laboratory models or experiments, animation, and good quality audio and video clippings. The interactivity provided by ICT will be used effectively to bridge the gap between active learning and passive teaching and to make learning a more interesting and enriching experience.
Teacher training
The teacher-training programme has also been planned with the aim of providing exposure and familiarization to computer and multimedia-based technology tools that can be productively used to improve and enhance teaching and learning in the classroom.
Computer literacy
Focus will also be on inducing computer literacy in the selected schools as a byproduct. Usage and hands-on sessions on basic applications, for teachers and also for selected students, will be incorporated during the later phase of the project.
Terminology of computer assisted instruction
· Computer Assisted Instruction (CAI)
· Computer Aided Instruction (CAI)
· Computer Assisted Learning (CAL)
· Computer Based Education (CBE)
· Computer Based Instruction (CBI)
· Computer Enriched Instruction (CEI)
· Computer Managed Instruction (CMI)
New terminology
· Web Based Training
· Web Based Learning
· Web Based Instruction
Types of computer assisted instruction
1. Drill-and-practice - Drill and practice provides opportunities for students to repeatedly practice skills that have previously been presented and for which further practice is necessary for mastery.
2. Tutorial - Tutorial activity includes both the presentation of information and its extension into different forms of work, including drill and practice, games and simulation.
3. Games - Game software often creates a contest to achieve the highest score and either beat others or beat the computer.
4. Simulation - Simulation software can provide an approximation of reality that does not require the expense of real life or its risks.
5. Discovery - The discovery approach provides a large database of information specific to a course or content area and challenges the learner to analyze, compare, infer and evaluate based on their explorations of the data.
6. Problem solving - This approach helps children develop specific problem-solving skills and strategies.
CAI provides
· Text or multimedia content
· Multiple-choice questions
· Problems
· Immediate feedback (illustrated in the sketch after this list)
· Notes on incorrect responses
· Summaries of students' performance
· Exercises for practice
· Worksheets and tests.
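The drill-and-practice and immediate-feedback features listed above can be illustrated with a minimal sketch in Python. The questions, options and wording below are invented for illustration only; a real CAI package would draw items from a question bank and keep a record of each learner's performance.

    # A minimal drill-and-practice sketch: present multiple-choice items,
    # give immediate feedback, and summarize the learner's performance.
    # The questions below are invented examples, not part of any CAI package.
    questions = [
        {"prompt": "Which river is associated with the Indus Valley Civilization?",
         "options": ["A. Ganga", "B. Indus", "C. Godavari"], "answer": "B",
         "note": "The civilization grew along the Indus and its tributaries."},
        {"prompt": "Who wrote the Indian national anthem?",
         "options": ["A. Rabindranath Tagore", "B. Bankim Chandra Chatterjee", "C. Sarojini Naidu"],
         "answer": "A",
         "note": "Jana Gana Mana was composed by Rabindranath Tagore."},
    ]

    score = 0
    for q in questions:
        print(q["prompt"])
        for option in q["options"]:
            print(" ", option)
        response = input("Your answer (A/B/C): ").strip().upper()
        if response == q["answer"]:                     # immediate feedback
            print("Correct!\n")
            score += 1
        else:
            print("Incorrect.", q["note"], "\n")        # note on the incorrect response

    print(f"You answered {score} of {len(questions)} items correctly.")  # performance summary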
Advantages of CAI
· One-to-one interaction
· Great motivator
· Freedom to experiment with different options
· Instantaneous response/immediate feedback to the answers elicited
· Self-pacing - allows students to proceed at their own pace
· Helps the teacher devote more time to individual students
· Privacy helps the shy and slow learner to learn
· Individual attention
· Students learn more, and more rapidly
· Multimedia helps in understanding difficult concepts through a multi-sensory approach
· Self-directed learning - students can decide when, where, and what to learn
Limitations of CAI
· Learners may feel overwhelmed by the information and resources available
· Overuse of multimedia may divert attention from the content
· Learning can become too mechanical
· Non-availability of good CAI packages
· Lack of infrastructure
The critical role of teacher guidance and support
The term
‘independent learning’ was commonly but inconsistently used in the interviews
and its implications for pedagogy were sometimes unclear. Within the context of
the increased level of individual or small group teacher-pupil interaction
reported, independence from the teacher (but not peers) was clearly implied.
While this was apparently motivating and most teachers mentioned that more
pupils were ‘on task’ when using ICT, in many cases it was the more able
students who ‘achieved well with little teacher input’ (FC). Self-directed work
could make it harder to keep a low ability group on task (KE). The emphasis in
teacher accounts shifted between pupil control and technical proficiency, and
freeing up the teacher (‘there were very few people who I really had to tutor
in going through the tasks’: DD). However, the notion of ‘independent learning’ is misleading since teachers continually emphasized the importance of their guiding and supportive role, and a widely shared view (expressed by 7 teachers) was that this kind of teacher input was essential when pupils were using ICT, even in the context of more independent working.
In fact, in most
cases it was the same teachers who reported taking a facilitating role yet less
pupil reliance on teacher intervention. For instance, one teacher described his
new role as one of helping children find information for themselves, with
prompts but largely under their own control; he subsequently commented in the
light of experience that “that traditional teacher role of helping them to
understand it and put it in… context, is back” (FC). In some cases, pupil
control and choice were very limited in practice despite an ‘independent
learning’ setup. By contrast, two teachers recognized that too much open-endedness
had proved confusing for pupils. These cases highlighted the importance of
teachers being ‘quite active’ in guiding
pupil activity to pre-empt floundering or off-task wandering.
Interpreting the findings as a whole seems to point to the conclusion that it
was easier in ICT supported lessons for most pupils to work without constant
direction and intervention but that the teacher’s support and facilitation of
learning remained of paramount importance (particularly for lower achieving
pupils). Several teachers recognized this and expressed a desire for a balance between teacher direction and providing opportunities for “pupil-centered”
learning. The central issue here was summarized by the reflection of one
English teacher (YL) on his attempts to balance between being over-directive
(providing more security but limiting imagination and risking similar task
outcomes) and under-directive (providing opportunity for independent learning
but risking confusion about task requirements).
Promoting active student participation,
experimentation and independent thinking
A newly emerging
role for teachers involved encouraging
active participation in ICT-supported activity (described in eight
interviews and one further case report).
This built upon their belief that
using ICT can enhance learning and motivation through the opportunities
it provides for self regulated, active
learning, i.e. for knowledge building rather than transmission, and for pupils working at their own pace.
Exploiting these opportunities through strategies involving “little adult intervention” and pupil freedom to choose their methods of working and find “as much information as they like or as little” (OT) “meant that they could be discoverers rather than followers” (RA). In contrast, traditional “chalk and talk” lessons involved more teacher
direction and “spoon feeding” according
to six interviewees. Teachers
generally considered themselves to be supporting
student-regulated learning through
facilitating information finding and developing understanding, e.g. by providing
opportunities for experimentation,
reflection and analysis. The emerging strategy (in eight cases) was one of prompting pupils with the aim of encouraging them to think for themselves and
find their own solutions.
Evaluation of ICT
Conventional approaches
Conventional
approaches to impact assessment focus on whether a project has met its stated
objectives and contributed to the achievement of the overall project
goals. This approach uses criteria of project relevance, efficiency,
effectiveness, impact and sustainability and looks at both intended and
unintended impact. Most ICT projects tend to follow this method. While this method can be cost-effective, the following demerits often make such an evaluation a ceremonial exercise.
Conventional
approaches have been more donor-focused and donor driven. The donor becomes the
key client, providing financial support and defining the terms of references
for the evaluation. The evaluation criteria are laid down by the donor, making
it impossible for the beneficiaries to participate. There is no attempt made to
learn the lessons from the project. More often than not, the evaluation is
carried out more to fulfill a management and accountability requirement than to
respond to project needs. An expert is hired or contracted to conduct the evaluation, and in some cases the project staff, who are very close to the programme, conduct some user interviews and fulfill the obligation to involve local project personnel. More often than not, the expert does not have a clue about the cultural, economic and political settings of the beneficiaries. There is a presupposition that the programme was successful. Data is collected to determine whether the project met the overall goals and objectives, and a report is produced. An attempt is made to invent success stories and evidence to prove the usefulness of the project.
Evaluation does not necessarily find the project to be a failure, even if in reality it was so. In most cases, stakeholders or beneficiaries play a very passive role, providing information but not participating in the evaluation itself. There is hardly any communication between the donor and the beneficiary. The exercise is a linear one, limited to a two-way interaction between the donor and the evaluator.
Evaluation based
on conventional approaches, if not properly administered, can become a one-way
linear process, with little or no feedback to the project. In an ICT
project, project recipients and all stakeholders should be involved in
understanding the internal dynamics of their project, its successes and
failures, and in proposing solutions for overcoming the obstacles and utilizing
the ICTs in context.
The ICT sector is growing so fast that new solutions to the practical problems faced on the ground are found every day. The factors that affect the projects are often centered on user behaviour towards the technology, which may vary from place to place according to the social setting. This makes it difficult for any evaluator to understand these complexities in the social context. Hence, it is important to mix a number of evaluation tools and techniques that suit the context.
Participatory approaches
Participatory
evaluations in ICT projects should primarily be oriented to the information
needs of the programme stakeholders. The scope of participants
should include all stakeholders, beneficiaries and non-beneficiaries of the
programme. This will result in finding the reasons for not participating
in the programme. Participant negotiations are very important to reach a
consensus on evaluation findings, and to solve problems and make plans to
improve performance. Views from all participants should be sought, as more powerful stakeholders can undermine others in a group. This situation can be avoided if the evaluator takes on the role of a facilitator. Many ICT projects suffer from a lack of understanding of the
project aims, objectives and concepts by all the stakeholders. New
technologies, such as the Internet can often be difficult to rationalize and
care is needed to prevent some people from becoming marginalized due to their
lack of understanding of the technology.
The following participatory evaluation framework can be incorporated into ICT programmes for enterprise development, with necessary arrangements by programme staff and their collaborators, including government offices, NGOs and community members. It may consist of four basic principles:
· Pre-planning and preparation
· Generating evaluation questions
· Data-gathering and analysis
· Reflection and action
4.2. PRESENTATION SOFTWARE
A
presentation program is a software package used to
display information in the form of a slide show. It has three
major functions: an editor that allows text to be inserted and formatted, a
method for inserting and manipulating graphic images, and a slide-show system
to display the content.
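A short sketch can make these three functions concrete. The example below assumes the third-party python-pptx library is installed (it is not part of any software named in this unit); it creates one slide with a formatted title and body text and saves the file, leaving the slide-show display to PowerPoint, Impress or a similar program.

    # A minimal sketch using the python-pptx library (assumed installed via
    # "pip install python-pptx"): insert and format text on a single slide.
    from pptx import Presentation
    from pptx.util import Pt

    prs = Presentation()                                  # a new, empty presentation
    slide = prs.slides.add_slide(prs.slide_layouts[1])    # "Title and Content" layout

    slide.shapes.title.text = "Rivers of India"           # editor function: insert text
    body = slide.placeholders[1].text_frame
    body.text = "The Ganga, Indus and Brahmaputra are major Himalayan rivers."
    body.paragraphs[0].font.size = Pt(24)                 # editor function: format text

    prs.save("rivers_of_india.pptx")                      # open in a slide-show program to display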
Early
presentation graphics software ran on computer workstations, such as those
manufactured by Trollman, Genigraphics, Autographix,
and Dicomed. It became
quite easy to make last-minute changes compared to traditional typesetting and paste-up.
It was also a lot easier to produce a large number of slides in a small amount
of time. However, these workstations also required skilled operators, and a
single workstation represented an investment of $50,000 to $200,000 (in 1979
dollars).
In
the mid-1980s developments in the world of computers changed the way
presentations were created. Inexpensive, specialized applications now made it
possible for anyone with a PC to create professional-looking presentation
graphics.
Originally
these programs were used to generate 35 mm slides, to be presented using a
slide projector. As these
programs became more common in the late 1980s several companies set up services
that would accept the shows on diskette and create
slides or print transparencies. In the 1990s dedicated LCD-based screens that could be placed on
the projectors started to replace the transparencies, and by the late 1990s
they had almost all been replaced by video projectors.
The first
commercial computer software specifically intended for creating WYSIWYG
presentations was developed at Hewlett Packard in 1979 and called BRUNO and
later HP-Draw. The first software displaying a presentation on a personal
computer screen was VCN ExecuVision,
developed in 1982. This program allowed users to choose from a library of
images to accompany the text of their presentation.
Features
A presentation program is supposed to help both the speaker, by giving easier access to his or her ideas, and the participants, by providing visual information that complements the talk. There
are many different types of presentations including professional
(work-related), education, entertainment, and for general communication.
Presentation programs can either supplement or replace the use of older visual
aid technology, such as pamphlets,
handouts, chalkboards, flip charts, posters, slides and overhead
transparencies. Text, graphics, movies, and other objects are positioned on
individual pages or "slides" or "foils". The
"slide" analogy is a reference to the slide projector, a device that has become somewhat
obsolete due to the use of presentation software.
Slides can be printed, or (more usually) displayed on-screen and navigated
through at the command of the presenter. Transitions between slides can be
animated in a variety of ways, as can the emergence of elements on a slide
itself. Typically a presentation has many constraints, the most important being the limited time available to present consistent information.
Many presentation
programs come with pre-designed images (clip art) and/or have the ability to import
graphic images, such as Visio and Edraw Max. Some tools also have the ability to
search and import images from Flickr or Google directly from the tool. Custom graphics can
also be created in other programs such as Adobe Photoshop or Adobe Illustrator and then exported. The concept
of clip art originated with the image library that
came as a complement with VCN ExecuVision, beginning in 1983.
With the growth of
digital photography
and video, many programs that handle these types of
media also include presentation functions for displaying them in a similar
"slide show" format. For example, Apple's i Photo allows groups of digital photos to be
displayed in a slide show with options such as selecting transitions, choosing
whether or not the show stops at the end or continues to loop, and including
music to accompany the photos.
Similar to
programming extensions
for an operating system
or web browser, "add ons" or plugins for
presentation programs can be used to enhance their capabilities. For example,
it would be useful to export a PowerPoint presentation as a Flash
animation or PDF document.
This would make delivery through removable media or sharing over the Internet
easier. Since PDF files are designed to be shared regardless of platform and
most web browsers already have the plug-in to view Flash files, these formats
would allow presentations to be more widely accessible.
Certain
presentation programs also offer an interactive integrated hardware element
designed to engage an audience (e.g. audience response systems, second screen applications) or facilitate
presentations across different geographical locations through the internet
(e.g. web conferencing).
Other integrated hardware devices, such as laser pointers and interactive whiteboards, ease the job of a live presenter.
Using slideshows
to accompany lectures is a popular teaching method, both with professors and
students. Slides offer something visual for students to look at while
listening to the lecture and often professors will make slides available either
before or after the lecture as an aid for review. For instructors, having
accompanying slides can help keep a lecture on track, making sure you don't
miss any key points, and can serve as guideposts throughout the lecture so that
you know how much material you have left to cover.
For years, Microsoft PowerPoint has been the go-to software for computer-based slideshows, but lately several competitors have entered the field. This section highlights some of the popular presentation software packages available, including key features and any points of hesitation.
Microsoft PowerPoint
PowerPoint Support Page: from this page
of Microsoft's support website, you can search for answers to your questions,
find articles that will help you get started with PowerPoint, or even find
web-based training sessions on PowerPoint. (Note that the default
articles and training sessions are for the new 2013 Microsoft programs. If
you're still using PowerPoint 2010 or earlier, scroll to the bottom of the page
to get articles and training for those versions.)
Features
· Embed and edit video within a slide
· Embed audio or voice over your PowerPoint presentation
· Add bookmarks to media files to pause or enhance media at designated points
· Microsoft-designed themes and animations to bring your slides to life
· User-friendly - relatively intuitive design and layout
· Comes with the Microsoft Office suite, so likely to already be at your fingertips (versus other programs that you might have to create accounts for, etc.)
· The new PowerPoint 2013 will allow you to create a Microsoft Live account so that you can store your presentations in the cloud and work on them anywhere
· Operating system-specific; viewers must have Microsoft Office or a program that can read Microsoft files to view the show
· Linear design for presentations limits the conceptual capabilities for presentations on non-linear subjects
Keynote
Keynote Support Page: from this page of Apple's
support website, you can search for answers to your questions in the user
forums, download the Keynote User's Guide, or read how-to articles on various
Keynote features.
Features
· Built-in narration tool
· Powerful tools for adding and editing graphics and other media files
· Apple-designed themes and animations to bring your slides to life
· The Keynote app for iPad and iPhone has surprisingly similar functionality and ease of use to the software itself
· Intuitively similar to PowerPoint - linear, slide format
· Integration with mobile Apple devices - Keynote Remote on iPad or iPhone allows you to control your presentation from the palm of your hand
· Easy format conversions - import PowerPoint slides into Keynote, and vice versa; save your Keynote in other formats such as a QuickTime movie or PDF
· When importing PowerPoint slides and vice versa, some features, such as particular fonts, may not translate exactly due to the differences in the programs
· As an Apple product, Keynote is not available for PCs
· Linear design for presentations limits the conceptual capabilities for presentations on non-linear subjects
Prezi
Prezi Online Manual: from this page of the Prezi
website, you can access articles and tutorials on subjects ranging from basics,
like learning the Prezi interface, to advanced features, like collaborating on
Prezi presentations. The menu in the gray column on the right highlights
other features of the manual, such as most popular articles and user forums.
Features
· Better depicts the complexity and interrelatedness of material; contrasted with the linearity of PowerPoint or Keynote
· Better displays complex, non-linear ideas
· Done properly, Prezis tend to be very visually appealing
· Web-based - not specific to an operating system and able to be edited from any computer with internet access
· Not nearly as intuitive to use
· Less easy to import audio/video/graphics
· Transitions, especially the zooming features, can cause queasiness for viewers
· The free version offers no privacy settings for your presentations - everyone will be able to see them; however, if you use your .edu email address to set up your account, you can get the "Enjoy" package for free or a reduced price on the "Pro" package; both offer better customization and privacy options.
SlideRocket
SlideRocket Support: from this page of SlideRocket's customer care website, you can search for answers to your questions, find answers to basic questions under Getting Started, and learn about more advanced features under Go Further. You can also contact Customer Service from the right-hand column.
Features
· Simple editing interface, reminiscent of Adobe Photoshop, with feature menus on both the left and right
· Style similar to PowerPoint (linear representation)
· Web-based - you don't need your computer or a memory device to work on your presentation; you can access and edit it from any computer by logging into your account on the SlideRocket website
· With a style similar to PowerPoint, the learning curve is small
· While it can export a PPT file to be played through PowerPoint, many features of the SlideRocket presentation may not translate in PowerPoint. There is a way to export it as an exe file in Windows or Mac OS so that you can play it on your computer, but this too can have problems. SlideRocket is a web-based program; it works best if you link to its published form on the SlideRocket website, but that requires making it publicly viewable.
· With your .edu email address, you can sign up for the Pro version (all the bells and whistles) of SlideRocket for free.
· When you set up your account, they'll send you a confirmation email before you can start using it - if you don't get the email, check your Junk Mail box.
Linux – Open Office Impress
Open Office Impress, a part of the Open Office suite package created by Sun Microsystems, is a presentation program similar to Microsoft PowerPoint. In addition to being able to create PDF files from presentations, it can also export presentations to SWF files, allowing them to be played on any computer with a Flash player installed. It is able to view, edit and save files in many file formats, including the .ppt format, which is used by Microsoft PowerPoint. Impress is distributed under an open-source license, so people can download it as free software; it is released under the terms of the Apache License. Open Office Impress users can install the Open Clip Art Library, which adds a large number of images for general presentation and drawing projects. The Linux distributions Debian, Gentoo, Mandriva and Ubuntu provide the ready-to-use open clipart package for download and installation from their online software repositories. Impress creates exciting slideshow presentations, similar to PowerPoint. Impress can turn presentations into Flash files and PDFs. You can even open and edit your existing PowerPoint files with Impress.
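Turning a presentation into a PDF can also be scripted. The sketch below assumes that LibreOffice (the successor project that ships Impress today) is installed and that its soffice command is on the PATH; the file and folder names are invented examples.

    # A small sketch: convert a presentation to PDF with Impress running in
    # headless (no GUI) mode. Assumes the "soffice" command is available;
    # the input file name is only an example.
    import subprocess

    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf",
         "--outdir", "exports", "lesson_slides.odp"],
        check=True,   # raise an error if the conversion fails
    )
    # The converted file appears as exports/lesson_slides.pdf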
Open Office
Impress is a presentation software program that is part of a suite of programs
from OpenOffice.org, available as a free download. Open Office Impress uses a
graphical approach to presentations in the form of slide shows that accompany the oral delivery of
the topic. This program can be effectively used in business and classrooms. Open Office Impress is one of the
simplest computer programs to learn. If you are at all familiar with Microsoft
PowerPoint, then you will be right at home with this program. Anyone can create
stunning presentations that look like they were designed by a professional. An added bonus is that you can open and use presentations that you have previously created in PowerPoint.
4.3. PREPARATION OF e-CONTENT
Information technology and the
Internet are major drivers of research, innovation, growth and social change.
The growth of the Internet has brought changes in all walks of life, including education. E-content includes all kinds of content created and delivered through various electronic media, from 'old media' such as print and radio to increasingly sophisticated electronic tools combining sounds, images and text. E-content requires huge amounts of creativity at both the 'information' level and the 'technology' level.
Learning object design
Learning Management Systems (LMSs) are web-based application platforms used to plan, implement, and assess learning processes related to online and offline training, administration and performance management. LMSs are defined as systems that manage learners, keeping track of their progress and performance across all types of learning activities. LMSs provide an instructor with a way to create and deliver content, monitor learners' participation, and assess learners' performance. In many institutions, the Learning Management System may include one or two content-authoring tools. A content-authoring tool is software used to create multimedia content for delivery on the World Wide Web.
Instructional design
Instructional design is a systematic, iterative process of activities aimed at creating a solution for an instructional problem. The steps involved in instructional design are: setting an instructional goal; goal analysis; identifying learning domains; specifying learning outcomes; preparing criterion-referenced test questions; and devising a clear instructional strategy.
The learning domains are verbal information, intellectual skills, psychomotor skills and attitudes. The instructional strategies may be drill and practice, tutorials, simulations and educational games.
Types of Content-authoring tools
The content-authoring tools differ in nature; the main standards and initiatives are SCORM, AICC, PROMETEUS, ARIADNE, ADL, AASL and LTSC. (i) SCORM (Sharable Courseware Object Reference Model) is a set of specifications that, when applied to course content, produces small reusable e-learning objects; (ii) AICC (Aviation Industry Computer-Based Training Committee) is an international association of technology-based training professionals that develops teaching guidelines for the aviation industry, which apply to the development, delivery, and evaluation of e-content training courses delivered via technology; (iii) PROMETEUS (Promoting Multimedia Access to Education and Training in European Society) was established with a clear underlying ideal to promote access to knowledge, education and e-content training for all European citizens; (iv) ARIADNE is a European Union project, based in Switzerland, focusing on the development of tools for producing, managing, and reusing computer-based pedagogical elements; (v) ADL (Advanced Distributed Learning Initiative) is a programme from the US Department of Defense and the White House Office of Science and Technology to develop the guidelines needed for efficient and effective e-content learning; (vi) AASL (American Association of School Librarians) has formulated the Information Literacy Standards for Student Learning, which concern the student, the teacher and the administrator; and (vii) LTSC (Learning Technology Standards Committee) has prepared technical standards and guidelines for the use of e-content components in education and is an internationally accredited standards committee founded under the Institute of Electrical and Electronics Engineers (IEEE).
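To give a feel for what a SCORM-style package looks like, the sketch below generates a stripped-down imsmanifest.xml with Python's standard library. It only illustrates the idea of describing a sharable content object and its files; a real SCORM manifest additionally requires schema declarations, metadata and version attributes that are omitted here, and all identifiers and file names below are invented.

    # A stripped-down sketch of a SCORM-style manifest (illustrative only;
    # real packages need schema locations, metadata and versioning).
    import xml.etree.ElementTree as ET

    manifest = ET.Element("manifest", identifier="com.example.riverlesson")

    organizations = ET.SubElement(manifest, "organizations")
    organization = ET.SubElement(organizations, "organization", identifier="ORG-1")
    ET.SubElement(organization, "title").text = "Rivers of India"
    item = ET.SubElement(organization, "item", identifierref="RES-1")
    ET.SubElement(item, "title").text = "Lesson 1: The Himalayan rivers"

    resources = ET.SubElement(manifest, "resources")
    resource = ET.SubElement(resources, "resource",
                             identifier="RES-1", type="webcontent", href="index.html")
    ET.SubElement(resource, "file", href="index.html")

    ET.ElementTree(manifest).write("imsmanifest.xml",
                                   encoding="utf-8", xml_declaration=True)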
Models of e-content development
E-content development models are available in five different forms, as follows: (i) the instructional design model by Kemp (1977) defined nine different components and adopted continuous updating through evaluation; (ii) the teaching of media in a systematic approach model by Vernon & Donald (1980) compared the different instructional design models; (iii) the Systematic Design of Instruction model by Dick & Carey (1990) described all the phases of a process that starts with instructional goals and ends with summative evaluation; (iv) Cisco Systems' Reusable Information Object Strategy (1999) consists of six content items, viz., introduction, importance, objectives, pre-requisites, scenario and outline, together with a Learning Management System (LMS); and (v) the content-based model by Cornea (2005) explained the learning objectives of a content item and the content's accessibility and reusability between various Learning Content Management Systems (LCMS).
Phases of e-content development
E-content development consists of six phases, viz., analysis, design, development, testing, implementation and evaluation.
The analysis phase
The analysis phase is the most important, as it identifies the gaps in the current situation. Its scope is shaped by the views of subject experts, the target audience, and the objectives and goals. In this phase, we must know the audience and their skills, the budget of the e-content, and the delivery methods and their constraints, with due dates.
The design phase
This phase involves the complete design of the learning solution and helps in planning the e-content preparation. In this phase, we must plan the use of relevant software, the required skills, and creative and innovative interactions of subject content such as texts, pictures, videos and suitable animations.
The development phase
This phase concerns the actual production of the e-content design. It helps to create the e-content by mixing texts, audio, video, animations, references, blogs, links, and MCQs (multiple choice questions) with some programming specifications such as home, exit and next controls.
The testing phase
This phase helps to administer the e-content in the actual educational field. In it, we must check for spelling mistakes and content errors, and test the clarity of pictures, the relevance of videos, the appropriateness of audio, the timing of animations, and the hyperlinks (a small automated check for hyperlinks is sketched below).
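One routine part of this testing, checking that every hyperlink in the e-content still responds, is easy to automate. The sketch below uses only Python's standard library; the URLs are invented placeholders for the links actually used in the e-content.

    # A small sketch for the testing phase: report hyperlinks that do not respond.
    # The URLs below are placeholders; replace them with the links in the e-content.
    from urllib.request import urlopen
    from urllib.error import URLError

    links = [
        "https://example.com/lesson1.html",
        "https://example.com/videos/rivers.mp4",
    ]

    for url in links:
        try:
            with urlopen(url, timeout=10) as response:
                print(f"OK   {url} (HTTP {response.status})")
        except URLError as err:
            print(f"FAIL {url} ({err})")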
The implementation phase
This phase delivers the e-content to the target audience. It explains how to install and use the content and records the difficulties experienced while using it. It also checks product accuracy and quality maintenance.
The evaluation phase
This phase assesses the e-content and its effectiveness. It considers feedback from both learners and instructors. Based on this feedback, the e-content is revised in post-production for more effective delivery.
Instructor's role in the development of e-content
A competent instructor in e-content is one who effectively and efficiently accomplishes a task in a given digital context, using appropriate knowledge, skills, attitudes and abilities that have been adjusted over time to the needed competencies (Varvel, 2007). The International Board of Standards for Training, Performance and Instruction has also developed standard competencies for instructors in e-content development in the following domains: (a) professional foundations, (b) planning and preparation, (c) instructional methods and strategies, (d) assessment and evaluation, and (e) management.
Characteristics of e-content development
Anurag Saxena (2011) explained the possible methods of converting educational content into e-content, viz., (i) learning by doing and learning by investigation; (ii) learning by using themes; (iii) learning by testing/evaluation; (iv) learning by simulation; and (v) learning by role-playing. As per the UGC (University Grants Commission, India) guidelines, e-content development needs the following categories, viz., (i) home; (ii) objectives; (iii) subject mapping; (iv) summary; (v) text with pictures and animations; (vi) video and audio; (vii) assignments, quiz and tutorial; (viii) references, glossary and links; (ix) case studies; (x) FAQs; (xi) download; (xii) blog; and (xiii) contact. These categories are arranged sequentially by subject experts, along with technical supporters, to develop the e-content materials. E-learning is a process and e-content is a product. E-content is generally designed to guide students through a lot of information in a specific task.
An e-content package can be
used as a teacher in the virtual classroom situations. The quality
of learning depends not only on the form of how the process is carried out
but also on what content is taught and how the content is presented. This
approach of teaching has become an answer to the complicated problems and
un-identified areas. In a class room, technology stimulates the learner and
gets the learner involved in the learning. Books are an extension of brain;
video is an extension of eye; audio is the extension of an ear; audio
conferencing is the extension of mind & vocal cord; computer is an
extension of fusion on mind, hands & eyes; satellite technology is an
extension of human reach and computer network is an extension of human
co-operation. So what we expect from e-content is that it should be able to stimulate the learner in such a way that the learner's potential is used to the maximum (Vijayakumari, 2011). E-content is valuable to the pupil and also helpful to teachers in all individualized instruction systems; it is the latest method of instruction and has attracted much attention together with related concepts. The ultimate aim of e-content is to abolish disparity among learners through effective education. E-content helps the teacher to teach in an effective manner. It enhances the learner's knowledge level, which leads to creative thinking, and it suggests further ideas on the basis of the given links and references.
E-learning comprises all forms of
electronically supported learning and teaching. Information and communication systems, whether used for networked learning or not, serve as media to implement the learning process. E-learning may be classified as online and offline. Online learning occurs through e-forums, SMS/MMS, search engines, meta search engines, e-dictionaries, e-books and e-journals, whereas offline learning occurs through MS Office applications, PowerPoint presentations, downloaded documents and CD-ROMs.
Parts of e-content
Module
In software, a module is a part of a program. Programs are composed of one or more independently developed modules that are not combined until the program is linked. A single module can contain one or several routines. In hardware, a module is a self-contained component: a separable component, frequently one that is interchangeable with others, for assembly into units of differing size, complexity, or function, or a selected unit of measure, ranging in size from a few inches to several feet, used as a basis for the planning and standardization of building materials.
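In the software sense described above, a module is simply a file of routines that other parts of the program import. A minimal illustration in Python, with hypothetical file and function names:

    # scoring.py - an independently developed module containing one routine
    def percentage(correct, total):
        """Return a learner's score as a percentage."""
        return 100.0 * correct / total

    # Another module (e.g. main.py) is combined with scoring.py only when the
    # program is assembled, by importing it:
    #     from scoring import percentage
    #     print(percentage(18, 20))   # prints 90.0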
Objective
A specific result that a person or system aims to achieve within a time frame and with available resources. In general, objectives are more specific and easier to measure than goals. Objectives are basic tools that underlie all planning and strategic activities. They serve as the basis for creating policy and evaluating performance. Some examples of business objectives include minimizing expenses, expanding internationally, or making a profit. The word 'objective' also describes something neutral (bias-free), relating to, or based on, verifiable evidence or facts instead of attitude, belief, or opinion: the opposite of subjective.
Glossary
A glossary, also known as a vocabulary, or clavis, is an alphabetical list of terms in a particular domain of knowledge
with the definitions for those terms. Traditionally, a
glossary appears at the end of a book and includes terms within that
book that are either newly introduced, uncommon, or specialized. While
glossaries are most commonly associated with non-fiction books, in some cases,
fiction novels may come with a glossary for unfamiliar terms. A bilingual
glossary is a list of terms in one language defined in a second language or glossed
by synonyms (or at least near-synonyms) in another
language. In a general sense, a glossary
contains explanations of concepts
relevant to a certain field of study or action. In this sense, the term is
related to the notion of ontology.
Automatic methods have also been developed that transform a glossary into an ontology or a computational lexicon.
Quiz
A quiz is a form of game
or mind sport in which the players (as individuals or
in teams) attempt to answer questions correctly. In some countries, a quiz is
also a brief assessment
used in education and similar fields to measure growth in knowledge, abilities,
and/or skills. Quizzes are usually scored in points and many quizzes are
designed to determine a winner from a group of participants - usually the
participant with the highest score. The Oxford
English Dictionary attests the use of the verb quiz to mean "to question or
interrogate", with a reference from 1843: "She com back an' quiesed us", which could be a
clue to its origin. Quiz as a test could be a corruption of the Latin qui es, meaning "Who are
you?" The American
Heritage Dictionary says it may be from the English dialect
verb quiset, meaning "to
question". In any case it is probably from the same root as question and inquisitive.
FAQ
Frequently asked questions (FAQ) or Questions and Answers (Q&A),
are listed questions and answers, all supposed to be commonly asked in some
context, and pertaining to a particular topic. The format is commonly used on
email mailing lists and other online forums, where certain common questions
tend to recur. "FAQ" is pronounced either as an initialism (F-A-Q) or as an acronym; since the term originated in textual media, its pronunciation varies. Depending on usage, the term may refer specifically to a single frequently asked question, or to an assembled list of many questions and their answers. Web page designers often label a single list of questions as a "FAQ", such as on Google.com, while using "FAQs" to denote multiple lists of questions, such as on United States Treasury sites. The FAQ is an Internet textual tradition originating from the technical limitations of early mailing lists at NASA in the early 1980s. The first FAQ developed over several pre-Web years, starting from 1982, when storage was expensive; users were expected to search the archives before posting, but in practice this rarely happened, and they tended to post questions to the mailing list instead of searching its archives. Repeating the "right" answers became tedious and went against developing netiquette.
Summary
A summary means writing something in short, like shortening a passage or a write-up without changing its meaning but by using different words and sentences. It is the act of reducing a written work, typically a book, into a shorter form. A summary may also be a short document, or section of a document, produced for business purposes, which summarizes a longer report or proposal or a group of related reports in such a way that readers can rapidly become acquainted with a large body of material without having to read it all. A summary is not a rewrite of the original piece and does not have to be long, nor should it be long. To write a summary, use your own words to express briefly the main idea and relevant details of the piece you have read. Your purpose in writing the summary is to give the basic ideas of the original reading. It is a comprehensive and usually brief abstract, recapitulation, or compendium of previously stated facts or statements.
We need innovative work in e-content
material as a form of digital literacy in educational settings particularly to
investigate the implications of new forms of social networking, knowledge
sharing and knowledge building. Finally, because of the pervasive nature of e-content as a digital technology, the commercial interest that is invested in it and the largely unregulated content of Internet-based sources, we also need to begin to sketch out what a critical digital literacy might look like. There is, in short, plenty to be done if we are to prepare children and young people to play an active and critical part in the digital future.
4.4. VIDEOCONFERENCING
Videoconferencing
(or video conference) means to conduct a conference
between two or more participants at different sites by using computer
networks to transmit audio and video data.
A point-to-point
(two-person) video conferencing system
works much like a video telephone. Each participant has a video camera,
microphone, and speakers mounted on his or her computer. As the two
participants speak to one another, their voices are carried over the network
and delivered to the other's speakers, and whatever images appear in front of
the video camera appear in a window
on the other participant's monitor.
Multipoint videoconferencing allows three or more participants to sit in a virtual
conference room and communicate as if they were sitting right next to each
other.
Videoconferencing (VC) is the
conduct of a videoconference
(also known as a video conference
or video teleconference) by a
set of telecommunication technologies which allow two or more locations to communicate by
simultaneous two-way video and audio transmissions. It has also been called
'visual collaboration' and is a type of groupware.
Videoconferencing differs from videophone
calls in that it's designed to serve a
conference or multiple locations rather than individuals. It is an intermediate
form of video telephony, first used commercially in Germany during the late-1930s
and later in the United States during the early 1970s as part of AT&T's development of Picture
phone technology. A
videoconference is a live connection between people in separate locations for
the purpose of communication, usually involving audio and often text as well as
video. At its simplest, videoconferencing provides transmission of static
images and text between two locations. At its most sophisticated, it provides
transmission of full-motion video images and high-quality audio between
multiple locations. A videoconference can be thought of as a phone call with
pictures - Microsoft refers to that aspect of its NetMeeting package as a
"web phone" - and indications suggest that videoconferencing will
someday become the primary mode of distance communication.
With the introduction of relatively
low cost, high capacity broadband telecommunication services in the late 1990s, coupled with powerful computing
processors and video compression techniques, videoconferencing has made significant inroads
in business, education, medicine and media. Until the mid 90s, the hardware costs made
videoconferencing prohibitively expensive for most organizations, but that
situation is changing rapidly. Many analysts believe that videoconferencing
will be one of the fastest-growing segments of the computer industry in the
latter half of the decade
Videoconferencing
uses audio and video telecommunications to bring people at different sites
together. This can be as simple as a conversation between people in private
offices (point-to-point) or involve several (multipoint) sites in large rooms at
multiple locations. Besides the audio and visual transmission of meeting
activities, allied videoconferencing technologies can be used to share
documents and display information on whiteboards. TV channels routinely use
this type of video telephony when reporting from distant locations. The news
media were to become regular users of mobile links to satellites using specially equipped trucks, and
much later via special satellite videophones in a briefcase.
Videoconferencing
systems throughout the 1990s rapidly evolved from very expensive proprietary
equipment, software and network requirements to a standards-based technology
readily available to the general public at a reasonable cost.
Finally, in the
1990s, Internet Protocol-based
videoconferencing became possible, and more efficient video compression
technologies were developed, permitting desktop, or personal computer
(PC)-based videoconferencing. While videoconferencing technology was initially
used primarily within internal corporate communication networks, one of the
first community service usages of the technology started in 1992 through a
unique partnership with PictureTel and IBM Corporations which at the time were
promoting a jointly developed desktop based videoconferencing product known as
the PCS/1. Over the next 15 years, Project DIANE (Diversified Information and
Assistance Network) grew to utilize a variety of videoconferencing platforms to
create a multi-state cooperative public service and distance education network
consisting of several hundred schools, neighborhood centers, libraries, science
museums, zoos and parks, public assistance centers, and other community
oriented organizations.
In the 2000s, video telephony was popularized via free Internet
services such as Skype and iChat,
web plugins and on-line telecommunication programs that promoted low cost,
albeit lower-quality, videoconferencing to virtually every location with an
Internet connection. Technological developments by videoconferencing developers
in the 2010s have extended the capabilities of video conferencing systems
beyond the boardroom for use with hand-held mobile devices that combine the use of
video, audio and on-screen drawing capabilities broadcasting in real-time over
secure networks, independent of location. Mobile collaboration
systems now give multiple people in previously unreachable locations, such as
workers on an off-shore oil rig, the ability to view and discuss issues with
colleagues thousands of miles away. Traditional videoconferencing system
manufacturers have begun providing mobile applications as well, such as those
that allow for live and still image streaming.
Technology
The core
technology used in a videoconferencing system is digital compression of audio
and video streams in real time. The hardware or software that performs compression is called a codec
(coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting
digital stream of 1s and 0s is subdivided into labeled packets,
which are then transmitted through a digital network of some kind (usually ISDN
or IP). The use of audio modems on the transmission line allows for the use of POTS, the Plain Old Telephone System, in some low-speed applications such as video telephony, because they convert the digital pulses to/from analog waves in the audio spectrum range.
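The compress-packetize-transmit pipeline described above can be sketched in a few lines. The example below assumes the third-party OpenCV package (cv2) and a webcam are available and uses an invented destination address; it grabs one frame, compresses it to JPEG and sends the compressed bytes as a single UDP packet. A real videoconferencing codec would instead use inter-frame video compression such as H.264 and stream continuously.

    # A toy sketch of the capture -> compress -> packetize -> transmit idea.
    # Assumes OpenCV (pip install opencv-python) and a webcam; the address is invented.
    import socket
    import cv2

    DESTINATION = ("192.0.2.10", 5004)       # documentation-range IP, example port

    camera = cv2.VideoCapture(0)             # open the default camera
    ok, frame = camera.read()                # grab a single video frame
    camera.release()

    if ok:
        # "Compression" step: encode the raw frame as a JPEG at quality 60
        encoded, jpeg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 60])
        if encoded:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP transport
            # One frame -> one datagram here; a real system splits frames across many packets
            sock.sendto(jpeg.tobytes(), DESTINATION)
            sock.close()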
The other components required for a
videoconferencing system include:
· Audio input: microphones, CD/DVD player, cassette player, or any other source of pre-amplified audio output.
· Computer: a data processing unit that ties together the other components, does the compressing and decompressing, and initiates and maintains the data linkage via the network.
Kinds of videoconferencing systems
There are basically two kinds of
videoconferencing systems:
Dedicated
systems
have all required components packaged into a single piece of equipment, usually
a console with a high quality remote
controlled
video camera. These cameras can be controlled at a distance to pan left and
right, tilt up and down, and zoom. They became known as PTZ cameras. The
console contains all electrical interfaces, the control computer, and the
software or hardware-based codec. Omni directional microphones are connected to
the console, as well as a TV monitor with loudspeakers and/or a video projector.
There are several types of dedicated
videoconferencing devices
Large
group videoconferencing is non-portable, large, more expensive devices used for
large rooms and auditoriums. Small group videoconferencing is non-portable or
portable, smaller, less expensive devices used for small meeting rooms. Individual videoconferencing systems are usually portable devices, meant for single users, with fixed cameras, microphones and loudspeakers integrated into the console.
Desktop
systems
are add-ons (hardware boards or software codecs) for normal PCs and laptops, transforming them into videoconferencing devices. A range of different cameras and microphones can be used with the board, which contains the necessary codec and transmission interfaces. Most desktop systems work with the H.323 standard. Videoconferences carried out via dispersed PCs are also known as e-meetings. These can be nonstandard (Microsoft Lync, Skype for Business, Google Hangouts, or Yahoo Messenger) or standards-based (Cisco Jabber).
WebRTC platforms
are video conferencing solutions that do not reside in an installed software application but are available through a standard web browser. Solutions such as Adobe Connect and Cisco WebEx can be accessed by going to a URL sent by the meeting organizer, and various degrees of security can be attached to the virtual "room". Often the user will be required to download a piece of software, called an "add-in", to enable the browser to access the local camera and microphone and establish a connection to the meeting.
Conferencing layers
The components
within a Conferencing System can be divided up into several different layers:
User Interface, Conference Control, Control or Signal Plane, and Media Plane.
User Interfaces (UI) can be either
graphical or voice responsive. Many in the industry have encountered both types
of interfaces, and normally graphical interfaces are encountered on a computer.
User interfaces for conferencing have a number of different uses; they can be
used for scheduling, setup, and making a video call. Through the user interface
the administrator is able to control the other three layers of the system.
Conference Control performs
resource allocation, management and routing. This layer along with the User
Interface creates meetings (scheduled or unscheduled) or adds and removes
participants from a conference.
Control (Signaling) Plane contains the stacks that signal the different endpoints to create a call and/or a conference. Signaling protocols include, but are not limited to, H.323 and the Session Initiation Protocol (SIP). These signals control incoming and outgoing connections as well as session parameters.
The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-time Transport Protocol (RTP), User Datagram Protocol (UDP) packets and the Real-time Transport Control Protocol (RTCP). RTP and UDP normally carry information such as the payload type (i.e., the type of codec), the frame rate, the video size and many others. RTCP, on the other hand, acts as a quality control protocol for detecting errors during streaming.
Multipoint videoconferencing
Simultaneous
videoconferencing among three or more remote points is possible by means of a Multipoint Control Unit
(MCU). This is a bridge that interconnects calls from several sources (in a
similar way to the audio conference call). All parties call the MCU, or the MCU
can also call the parties which are going to participate, in sequence. There
are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which
are pure software, and others which are a combination of hardware and software.
An MCU is characterized according to the number of simultaneous calls it can
handle, its ability to conduct transposing of data rates and protocols, and
features such as Continuous Presence, in which multiple parties can be seen
on-screen at once. MCUs can be stand-alone hardware devices, or they can be
embedded into dedicated videoconferencing units.
The MCU consists of two logical components:
A
single multipoint controller (MC), and
Multipoint
Processors (MP) sometimes referred to as the mixer.
The MC controls
the conferencing while it is active on the signaling plane, which is simply
where the system manages conferencing creation, endpoint signaling and
in-conferencing controls. This component negotiates parameters with every
endpoint in the network and controls conferencing resources. While the MC
controls resources and signaling negotiations, the MP operates on the media
plane and receives media from each endpoint. The MP generates output streams
from each endpoint and redirects the information to other endpoints in the conference.
Some systems are
capable of multipoint conferencing with no MCU, stand-alone, embedded or
otherwise. These use a standards-based H.323 technique known as
"decentralized multipoint", where each station in a multipoint call
exchanges video and audio directly with the other stations with no central
"manager" or other bottleneck. The advantages of this technique are
that the video and audio will generally be of higher quality because they don't
have to be relayed through a central point. Also, users can make ad-hoc
multipoint calls without any concern for the availability or control of an MCU.
This added convenience and quality comes at the expense of some increased
network bandwidth, because every station must transmit to every other station
directly.
Videoconferencing modes
Videoconferencing systems use
several common operating modes:
· Voice-Activated Switch (VAS);
· Continuous Presence.
In VAS mode, the MCU switches which endpoint can be seen by the other endpoints based on voice levels. If there are four people in a conference, the only site seen is the one that is talking; the location with the loudest voice will be seen by the other participants.
Continuous
Presence mode, displays multiple participants at the same time. The MP in this
mode takes the streams from the different endpoints and puts them all together
into a single video image. In this mode, the MCU normally sends the same type
of images to all participants. Typically these types of images are called “layouts”
and can vary depending on the number of participants in a conference.
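As a rough illustration of voice-activated switching, the following sketch (Python; the endpoint names and audio-level measure are assumptions, not part of the original text) picks the loudest endpoint and shows it to everyone else.

# Simplified sketch of voice-activated switching (VAS): the endpoint with the
# highest recent audio level is shown to all other endpoints.
# Endpoint names and the audio-level measure are assumptions for illustration.
def select_active_site(audio_levels):
    """audio_levels: dict mapping endpoint name -> measured level (e.g. RMS)."""
    return max(audio_levels, key=audio_levels.get)

levels = {"Site A": 0.12, "Site B": 0.64, "Site C": 0.31, "Site D": 0.05}
active = select_active_site(levels)
for endpoint in levels:
    if endpoint != active:
        print(f"send video of {active} to {endpoint}")

A Continuous Presence layout would instead tile several of these streams into one composed image rather than forwarding a single speaker.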
Echo cancellation
A fundamental
feature of professional videoconferencing systems is Acoustic Echo Cancellation (AEC). Echo can be
defined as interference between the reflected sound of the original source and the new sound
being produced by that source. AEC is an algorithm
that detects when sound originating from the audio output of the system re-enters the audio input of
the videoconferencing codec after some time delay, and removes it.
If unchecked, this can lead to several problems including:
·
The
remote party hearing their own voice coming back at them (usually significantly
delayed)
·
Howling
created by feedback.
Echo cancellation is a
processor-intensive task that usually works over a narrow range of sound
delays.
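A common building block behind such cancellation is an adaptive filter that learns the echo path and subtracts its estimate from the microphone signal. The toy sketch below (Python with NumPy; the filter length, step size and simulated echo path are assumptions for illustration, not any product's implementation) shows the classic least-mean-squares (LMS) approach.

import numpy as np

# Toy LMS adaptive filter, the classic building block of acoustic echo cancellation.
def lms_echo_cancel(far_end, mic, taps=64, mu=0.01):
    w = np.zeros(taps)                  # adaptive estimate of the echo path
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]   # most recent far-end samples, newest first
        echo_estimate = w @ x
        e = mic[n] - echo_estimate      # residual = microphone minus estimated echo
        w += mu * e * x                 # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
far = rng.standard_normal(8000)                 # stand-in for far-end speech
echo_path = np.zeros(64)
echo_path[10] = 0.5                             # one delayed, attenuated reflection
mic = np.convolve(far, echo_path)[:8000]        # microphone picks up only the echo here
cleaned = lms_echo_cancel(far, mic)
print("echo power before:", round(float(np.mean(mic[4000:] ** 2)), 4),
      "after:", round(float(np.mean(cleaned[4000:] ** 2)), 4))

Printing the residual energy before and after shows the estimated echo being removed; real systems add refinements such as double-talk detection and longer filters on top of this core idea.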
Cloud-based video conferencing
Cloud-based video
conferencing can be used without the hardware generally required by other video
conferencing systems, and can be designed for use by SMEs or by
larger international companies such as Facebook. Cloud-based systems can handle either
2D or 3D video broadcasting. Cloud-based systems can also support mobile
calls, VoIP, and other forms of video calling. They can also come with a video
recording function to archive past meetings.
Technical and other issues
Computer security experts have shown that poorly
configured or inadequately supervised videoconferencing systems can permit an
easy 'virtual' entry by computer hackers
and criminals into company premises and corporate boardrooms via those very
videoconferencing systems. Some observers argue that three outstanding issues
have prevented videoconferencing from becoming a standard form of
communication, despite the ubiquity of videoconferencing-capable systems. These
issues are:
Eye
contact: Eye contact plays a large role in conversational turn-taking, perceived attention and intent, and
other aspects of group communication. While traditional telephone conversations
give no eye contact cues, many videoconferencing systems are arguably worse in
that they provide an incorrect impression that the remote interlocutor is
avoiding eye contact. Some telepresence systems have cameras located in the
screens that reduce the amount of parallax observed by the
users. This issue is also being addressed through research that generates a
synthetic image with eye contact using stereo reconstruction.
Telcordia Technologies, formerly Bell
Communications Research, owns a patent for eye-to-eye videoconferencing that uses
rear-projection screens with the video camera behind them, an approach that evolved from a 1960s
U.S. military system that provided videoconferencing services between the White House and various other government and
military facilities. This technique eliminates the need for special cameras or
image processing.
Appearance
consciousness:
A second psychological problem with videoconferencing is being on camera, with
the video stream possibly even being recorded. The burden of presenting an
acceptable on-screen appearance is not present in audio-only communication.
Early studies by Alphonse Chapanis found that the addition of video actually
impaired communication, possibly because of the consciousness of being on
camera.
Signal
latency: Transporting digital signals through the many processing and network
steps takes time. In a telecommunicated
conversation, a latency (time lag) larger than about 150–300 ms
becomes noticeable and is soon perceived as unnatural and distracting.
Therefore, in addition to stable high bandwidth, a small total round-trip time is another
major technical requirement of the communication channel for interactive
videoconferencing.
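The sketch below (Python; every figure is an illustrative assumption, not a measurement) adds up a plausible one-way latency budget to show how quickly the 150–300 ms range mentioned above is consumed.

# Back-of-the-envelope one-way latency budget for a video call.
# All figures are illustrative assumptions.
budget_ms = {
    "capture and encode": 40,
    "packetization and send buffering": 10,
    "network propagation and queuing": 80,
    "receive jitter buffer": 40,
    "decode and display": 30,
}
one_way = sum(budget_ms.values())
print("one-way latency:", one_way, "ms; round trip:", 2 * one_way, "ms")
# 200 ms one way already sits inside the ~150-300 ms range that users notice,
# so each stage has to be trimmed to keep the conversation feeling natural.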
The issue of eye-contact may be
solved with advancing technology, and presumably the issue of appearance
consciousness will fade as people become accustomed to videoconferencing.
Impact on education
Videoconferencing
provides students with the opportunity to learn by participating in two-way
communication forums. Furthermore, teachers and lecturers worldwide can be
brought to remote or otherwise isolated educational facilities. Students from
diverse communities and backgrounds can come together to learn about one
another, although language barriers
will continue to persist. Such students are able to explore, communicate,
analyze and share information and ideas with one another. Through
videoconferencing, students can visit other parts of the world to speak with
their peers, and visit museums and educational facilities. Such virtual
field trips can provide enriched learning opportunities to students,
especially those in geographically isolated locations, and to the economically
disadvantaged. Small schools can use these technologies to pool resources and
provide courses, such as in foreign languages, which could not otherwise be
offered.
A few examples of benefits that
videoconferencing can provide in campus environments include:
·
faculty members keeping in touch with classes while attending conferences;
·
guest lecturers brought into classes from other institutions;
·
researchers collaborating with colleagues at other institutions on a regular basis without loss of time due to travel;
·
schools with multiple campuses collaborating and sharing professors;
·
schools from two separate nations engaging in cross-cultural exchanges;
·
faculty members participating in thesis defenses at other institutions;
·
administrators on tight schedules collaborating on budget preparation from different parts of campus;
·
faculty committees auditioning scholarship candidates;
·
researchers answering questions about grant proposals from agencies or review committees;
·
students interviewing with employers in other cities.
The
intangible benefits include the facilitation of group work among geographically
distant teammates and a stronger sense of community among business contacts,
both within and between companies. In terms of group work, users can chat,
transfer files, share programs, send and receive graphic data, and operate
computers from remote locations. On a more personal level, the face-to-face
connection adds non-verbal communication to the exchange and allows
participants to develop a stronger sense of familiarity with individuals they
may never actually meet in the same place.
4.5.
LEARNING OBJECT
A learning object is "a collection of content items, practice
items, and assessment items that are combined based on a single learning
objective". The term is credited to Wayne Hodgins when he created a
working group in 1994 bearing the name though the concept was first described
by Gerard in 1967. Learning objects go by many names, including content
objects, chunks, educational objects, information objects, intelligent objects,
knowledge bits, knowledge objects, learning components, media objects, reusable
curriculum components, nuggets, reusable information objects, and reusable
learning objects, testable reusable units of cognition, training components,
and units of learning.
Learning objects offer a new
conceptualization of the learning process: rather than the traditional
"several hour chunk", they provide smaller, self-contained, re-usable
units of learning. They will typically
have a number of different components, which range from descriptive data to
information about rights and educational level. At their core, however, will be
instructional content, practice, and assessment. A key issue is the use of
metadata. Learning object design raises issues of portability, and of the
object's relation to a broader learning management system. “The learning
object remains an ill-defined concept, despite numerous and extensive
discussions in the literature. At a very general level, a learning object could
be defined as a pedagogical resource.
We
suggest the following very global definition: A learning object is a resource. This definition is not very
operational, but it is at least compatible with learning design models that
usually distinguish between resources (of various sorts), services (tools) and
learning activities (scenarios) as the building blocks for educational designs.
Tools may of course include learning objects. Also, student productions may
become learning objects, an idea that goes beyond students merely presenting
content.” “It appears unlikely that any of the existing
definitions can serve to align communities with diverse perspectives around any
common understanding leading to advancement in education and learning outcomes
through technology integration.” Instead
of a single detailed definition, Churchill (2007) states that “a learning object is a
representation designed to afford uses in different educational contexts”. He then
proposes a typology of several kinds of learning objects which could then be
defined in more precise terms.
Chiappe
defined Learning Objects as: "A digital self-contained and reusable
entity, with a clear educational purpose, with at least three internal and
editable components: content, learning activities and elements of context. The
learning objects must have an external structure of information to facilitate
their identification, storage and retrieval: the metadata." The following definitions focus on the
relation between learning object and digital media. RLO-CETL, a British
inter-university Learning Objects Center, defines "reusable learning
objects" as "web-based interactive chunks of e-learning designed to
explain a stand-alone learning objective". Daniel Rehak and Robin Mason
define it as "a digitized entity which can be used, reused or referenced
during technology supported learning".
Adapting a
definition from the Wisconsin Online Resource Center, Robert J. Beck suggests
that learning objects have the following key characteristics:
Learning
objects are a new way of thinking about learning content. Traditionally,
content comes in a several hour chunk. Learning objects are much smaller units
of learning, typically ranging from 2 minutes to 15 minutes.
·
Are
self-contained – each learning object can be taken independently
·
Are
reusable – a single learning object may be used in multiple contexts for
multiple purposes
·
Can
be aggregated – learning objects can be grouped into larger collections of
content, including traditional course structures
·
Are
tagged with metadata – every learning object has descriptive information
allowing it to be easily found by a search
Components
The following is a list of some of the types of
information that may be included in a learning object:
·
Life
Cycle, including: version, status
·
Instructional
Content, including: text, web pages, images, sound, video
·
Glossary
of Terms, including: terms, definition, acronyms
·
Quizzes
and Assessments, including: questions, answers
·
Rights,
including: cost, copyrights, restrictions on use
·
Relationships
to Other Courses, including prerequisite courses
·
Educational
Level, including: grade level, age range, typical learning time, and
difficulty.
·
Typology
as defined by Churchill (2007): presentation, practice, simulation, conceptual
models, information, and contextual representation
One
of the key issues in using learning objects is their identification by search
engines or content management systems. This is usually facilitated by assigning
descriptive learning object metadata. Just as a book
in a library has a record in the card catalog, learning
objects must also be tagged with metadata. The most important pieces of
metadata typically associated with a learning object include:
·
Objective: the educational objective the learning object teaches.
·
Prerequisites: the list of skills (typically represented as objectives) which the learner must know before viewing the learning object.
·
Topic: the topic the learning object teaches, typically represented in a taxonomy.
·
Interactivity: the interaction model of the learning object.
·
Technology requirements: the system requirements needed to view the learning object.
Learning object content is also often described in terms of three aggregation levels:
·
Raw content: the most fine-granular level consists of raw media elements, including media types like text, audio, illustration, animation and others.
·
Reusable information object: from raw media elements, information objects are formed. They describe a certain procedure, process or structure, define a concept, present a fact, or provide an overview of some subject.
·
Reusable learning object: the third aggregation layer combines information objects circumscribed by a learning objective. The objects at this level are called learning objects.
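To make this concrete, here is a hypothetical metadata record for a single learning object, sketched in Python; the field names and values are illustrative only and are not drawn from any particular metadata standard.

# Hypothetical metadata record for one learning object; all values are illustrative.
learning_object_metadata = {
    "title": "The water cycle",
    "objective": "Describe evaporation, condensation and precipitation as stages of the water cycle",
    "prerequisites": ["Identify the three states of water"],
    "topic": "Science > Earth and environment > Water",
    "interactivity": "practice object with feedback",
    "technology_requirements": "web browser with audio support",
    "educational_level": {"grade": "upper primary", "typical_learning_time_minutes": 10},
    "rights": {"cost": "free", "restrictions": "classroom use"},
}
print(learning_object_metadata["objective"])

A repository or learning management system can index such records so that a search on the topic or objective returns the object, which is exactly the role metadata plays in the card-catalog analogy above.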
Types of learning object
Presentation object
Direct
instruction and presentation resources designed with the intention to transmit
specific subject matter.
Practice object
Drill
and practice with feedback, an educational game, or a representation that allows
practice and learning of certain procedures.
Simulation object
Representation
of some real-life system or process.
Conceptual model
Representation of a key concept or
related concepts of the subject matter.
4.6.
IHMC CONCEPT MAP TOOLS
What is concept map?
A
concept map or conceptual diagram is a diagram that depicts
suggested relationships between concepts. It is a
graphical tool that designers, engineers, technical writers, and others use to organize and
structure knowledge. A concept map
typically represents ideas and information as boxes or circles, which it
connects with labeled arrows in a downward-branching hierarchical structure.
The relationship between concepts can be articulated in linking phrases such as
causes, requires, or contributes
to. The technique for visualizing these relationships among different
concepts is called concept mapping.
Concept maps can also be used to define the ontology of computer systems, for example with
the object-role modeling or Unified Modeling Language formalisms.
A
concept map is a way of representing relationships between ideas, images, or words in the same way
that a sentence diagram represents the
grammar of a sentence, a road map represents the locations of highways and
towns, and a circuit diagram represents the
workings of an electrical appliance. In a concept map, each
word or phrase
connects to another, and links back to the original idea, word, or phrase.
Concept maps are a way to develop logical thinking and study skills by
revealing connections and helping students see how individual ideas form a
larger whole. An example of the use of concept maps is provided in the context
of learning about types of fuel.
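As a small illustration, a concept map can be stored as a set of propositions of the form concept – linking phrase – concept; the fuel-related propositions below (Python) are an assumed example, not the one referred to in the original text.

# A concept map represented as propositions: (concept, linking phrase, concept).
# The propositions are illustrative assumptions about a "types of fuel" topic.
propositions = [
    ("Fuels", "can be", "solid"),
    ("Fuels", "can be", "liquid"),
    ("Fuels", "can be", "gaseous"),
    ("Coal", "is an example of", "solid fuel"),
    ("Petrol", "is an example of", "liquid fuel"),
    ("Fuels", "release", "energy on burning"),
]
for concept, link, other in propositions:
    print(f"{concept} --[{link}]--> {other}")

Each printed line corresponds to one labeled arrow in the drawn map.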
Concept
maps were developed to enhance meaningful learning in the sciences. A well-made
concept map grows within a context
frame defined by an explicit "focus question", while a mind
map
often has only branches radiating out from a central picture. Some research
evidence suggests that the brain stores knowledge as productions
(situation-response conditionals) that act on declarative memory content, which is also referred
to as chunks or propositions. Because concept maps are constructed to reflect
organization of the declarative memory system, they facilitate sense-making and
meaningful learning on the part of individuals who make concept maps and those
who use them.
CMAP
TOOLS
CmapTools is a software project for creating concept maps, developed by the Florida
Institute for Human and Machine Cognition (IHMC). The
IHMC CmapTools program empowers users to construct, navigate, share and
criticize knowledge models represented as concept maps. Among many other features, it allows users to
construct their Cmaps on their personal computers. Concept
maps are graphical tools for organizing and representing knowledge. They include
concepts, usually enclosed in circles or boxes of some type, and relationships
between concepts indicated by a connecting line linking two concepts. Words on
the line, referred to as linking words
or linking phrases, specify the relationship between the two concepts.
BAIJU AYYAPPAN K
ASSISTANT PROFESSOR IN SOCIAL SCIENCE
CUTEC CHALAKUDY