
Monday, February 28, 2011

How to embed a SWF file in Excel/Word?

Using Office XP, Excel or Word

From the 'View' menu select 'Toolbars' and tick the 'Control Toolbox'

On the 'Control Toolbox' toolbar click on the 'More controls' icon

A list of controls will be displayed. Scroll down until you find the 'Shockwave Flash Object' and then click on it.


 

Excel

This should change your cursor to a crosshair. Move to the area of the worksheet where you want to insert the 'Shockwave Flash Object'.
Left-click, hold and drag to create a box of the required size.

Word

Word will automatically insert the control where the cursor is.
Its size can be set by dragging the edges or via its 'Properties'.


 

Next right click on the control you have just inserted and select 'Properties'.

Set the following properties

  • Autoload = True
  • EmbedMovie = True
  • Enabled = True
  • Loop = True
  • Playing = True
  • Visible = True
  • Movie = c:\flash.swf (Change this to the location of your .swf file)

Close the 'Properties' window.

Save the file.

Close the file.

Reopen the file.

The .swf file should start playing automatically.
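
If you would rather script these steps than click through them, here is a minimal sketch that drives Excel over COM from Python using the pywin32 package. It is only a sketch, not part of the original instructions: the control's ProgID, the placement values and the file paths are assumptions, and it presumes that Excel and a registered Shockwave Flash ActiveX control are still present on the machine.

    import win32com.client

    xl = win32com.client.Dispatch("Excel.Application")
    xl.Visible = True
    wb = xl.Workbooks.Add()
    ws = wb.Worksheets(1)

    # Insert the 'Shockwave Flash Object' control and size it on the sheet
    # (Left/Top/Width/Height values are arbitrary examples).
    ole = ws.OLEObjects().Add(ClassType="ShockwaveFlash.ShockwaveFlash",
                              Left=20, Top=20, Width=480, Height=360)

    # Set the same properties listed above on the underlying ActiveX control.
    flash = ole.Object
    flash.Movie = r"c:\flash.swf"   # change this to the location of your .swf file
    flash.EmbedMovie = True
    flash.Loop = True
    flash.Playing = True

    wb.SaveAs(r"c:\flash_demo.xls", FileFormat=56)   # 56 = xlExcel8 (.xls); hypothetical path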


Sunday, February 27, 2011

Don’t grieve. Anything you lose comes round in another form.

By Jalal-Uddin Rumi

Don't grieve. Anything you lose comes round in another form.

Friends are enemies sometimes, and enemies friends.

I was a tiny bug. Now a mountain. I was left behind. Now honored at the head. You healed my wounded hunger and anger, and made me a poet who sings about joy.

If your guidance is your ego, don't rely on luck for help. You sleep during the day and the nights are short. By the time you wake up your life may be over.

Let the beauty we love be what we do.

Let the lover be disgraceful, crazy, absent-minded. Someone sober will worry about events going badly. Let the lover be.

Let yourself be silently drawn by the stronger pull of what you really love.

Most people guard against going into the fire, and so end up in it.

My friend, the sufi is the friend of the present moment. To say tomorrow is not our way.

Nightingales are put in cages because their songs give pleasure. Whoever heard of keeping a crow?

No longer a stranger, you listen all day to these crazy love-words. Like a bee you fill hundreds of homes with honey, though yours is a long flight from here.

No mirror ever became iron again; No bread ever became wheat; No ripened grape ever became sour fruit. Mature yourself and be secure from a change for the worse. Become the light.

Only from the heart Can you touch the sky.

Patience is the key to joy.

People of the world don't look at themselves, and so they blame one another.

Since in order to speak, one must first listen, learn to speak by listening.

That which is false troubles the heart, but truth brings joyous tranquility.

The intelligent want self-control; children want candy.

The middle path is the way to wisdom.

The only lasting beauty is the beauty of the heart.

Thirst drove me down to the water where I drank the moon's reflection.

To praise is to praise how one surrenders to the emptiness.

We come spinning out of nothingness, scattering stars like dust.

We rarely hear the inward music, but we're all dancing to it nevertheless.

You think the shadow is the substance.


 

Jalal-Uddin Rumi (1207-1273) — Persian Sufi Mystic Poet


Saturday, February 26, 2011

How to Develop Assessment Tools


By JOHN S, eHow Contributor

updated: January 13, 2011

An assessment is a diagnostic process that measures an individual's behaviors, motivators, attitudes and competencies. Assessment tools comprise various instruments and procedures. These tools are widely used in educational institutions, nonprofit organizations and the corporate world. Successful assessment tools are designed and developed using scientific methods.

Instructions


  1. Develop assessment tools with the candidates to be assessed in mind. Different scenarios call for different tools and modes of evaluation. Ensure that the instruments and procedures for assessing are relevant to the audience, the skills and the task for which they are being evaluated.
  2. Set benchmarks. According to the "Business Dictionary," a benchmark is a "standard, or a set of standards, used as a point of reference for evaluating performance or level of quality." Take into account all the factors, attributes and competencies that you want to measure and improve on. Ensure that the benchmarks you establish are specific and operational. Operational benchmarks will also help you carry out realistic improvements after the assessment.
  3. Establish methods for gathering evidence. Assessment tools are functional only to the extent that they are able to gather cognitive, behavioral and statistical outputs of those being assessed. Consider designing a tool that clearly indicates the competencies, skills, attributes and behaviors of candidates against the benchmarks. These tools may include comprehensive questionnaires, SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis and diagnostic models.
  4. Adhere to the principles of assessment, which include the following:

    -- Validity: The extent to which the evidence gathered can be supported.

    -- Reliability: The consistency with which the tools used for one set of candidates can be used to assess other candidates with the same competencies and generate the same results.

    -- Flexibility: Allowing the candidates ample time to understand the terms of the assessment.

    -- Fairness: Criteria should not discriminate against an individual or group of candidates.

  5. Establish a method for assessing and evaluating outcomes against the benchmarks. Effective assessment tools should be able to interpret the outcome of the measurements. Depending on the purpose of the assessment, consider using the three major forms of evaluation: goal-based, outcome-based, and process-based. (A minimal sketch of comparing candidate scores against benchmarks appears after this list.)
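
Here is that sketch: a small, hypothetical Python example of step 5, comparing each candidate's assessment scores against operational benchmarks. The competency names, benchmark values and scores are all invented for illustration.

    # Hypothetical operational benchmarks (minimum acceptable score per competency).
    benchmarks = {"communication": 3.5, "problem_solving": 4.0, "teamwork": 3.0}

    # Hypothetical assessment results gathered for two candidates.
    candidates = {
        "candidate_1": {"communication": 4.2, "problem_solving": 3.6, "teamwork": 3.8},
        "candidate_2": {"communication": 3.1, "problem_solving": 4.4, "teamwork": 2.9},
    }

    for name, scores in candidates.items():
        # Report only the competencies that fall short of their benchmark.
        gaps = {c: round(scores[c] - target, 1)
                for c, target in benchmarks.items() if scores[c] < target}
        print(name, "meets all benchmarks" if not gaps else f"gaps vs. benchmarks: {gaps}")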


Read more: How to Develop Assessment Tools | eHow.com
http://www.ehow.com/how_7771843_develop-assessment-tools.html#ixzz1F3Z6JsW8


Types of Reliability

You learned in the Theory of Reliability that it's not possible to calculate reliability exactly. Instead, we have to estimate reliability, and this is always an imperfect endeavor. Here, I want to introduce the major reliability estimators and talk about their strengths and weaknesses.

There are four general classes of reliability estimates, each of which estimates reliability in a different way. They are:

  • Inter-Rater or Inter-Observer Reliability
    Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon.
  • Test-Retest Reliability
    Used to assess the consistency of a measure from one time to another.
  • Parallel-Forms Reliability
    Used to assess the consistency of the results of two tests constructed in the same way from the same content domain.
  • Internal Consistency Reliability
    Used to assess the consistency of results across items within a test.

Let's discuss each of these in turn.

Inter-Rater or Inter-Observer Reliability

Whenever you use humans as a part of your measurement procedure, you have to worry about whether the results you get are reliable or consistent. People are notorious for their inconsistency. We are easily distractible. We get tired of doing repetitive tasks. We daydream. We misinterpret.

So how do we determine whether two observers are being consistent in their observations? You probably should establish inter-rater reliability outside of the context of the measurement in your study. After all, if you use data from your study to establish reliability, and you find that reliability is low, you're kind of stuck. Probably it's best to do this as a side study or pilot study. And, if your study goes on for a long time, you may want to reestablish inter-rater reliability from time to time to assure that your raters aren't changing.

There are two major ways to actually estimate inter-rater reliability. If your measurement consists of categories -- the raters are checking off which category each observation falls in -- you can calculate the percent of agreement between the raters. For instance, let's say you had 100 observations that were being rated by two raters. For each observation, the rater could check one of three categories. Imagine that on 86 of the 100 observations the raters checked the same category. In this case, the percent of agreement would be 86%. OK, it's a crude measure, but it does give an idea of how much agreement exists, and it works no matter how many categories are used for each observation.
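
As a quick illustration, here is a minimal Python sketch of the percent-agreement calculation for two raters assigning each observation to one of three categories. The ratings are made up, and numpy is assumed to be available.

    import numpy as np

    # Category codes (1-3) assigned by two raters to the same ten observations.
    rater_a = np.array([1, 2, 2, 3, 1, 2, 3, 3, 1, 2])
    rater_b = np.array([1, 2, 3, 3, 1, 2, 3, 1, 1, 2])

    # Percent agreement = share of observations where both raters chose the same category.
    agreement = 100 * np.mean(rater_a == rater_b)
    print(f"Percent agreement: {agreement:.0f}%")   # 80% for this made-up data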

The other major way to estimate inter-rater reliability is appropriate when the measure is a continuous one. There, all you need to do is calculate the correlation between the ratings of the two observers. For instance, they might be rating the overall level of activity in a classroom on a 1-to-7 scale. You could have them give their rating at regular time intervals (e.g., every 30 seconds). The correlation between these ratings would give you an estimate of the reliability or consistency between the raters.
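
For the continuous case, the estimate is simply the correlation between the two observers' ratings. A minimal sketch, again with made-up 1-to-7 activity ratings and numpy assumed:

    import numpy as np

    # Activity ratings (1-7 scale) given by two observers at the same time intervals.
    observer_1 = np.array([3, 4, 5, 6, 4, 3, 5, 6, 7, 5])
    observer_2 = np.array([3, 5, 5, 6, 4, 4, 5, 7, 6, 5])

    r = np.corrcoef(observer_1, observer_2)[0, 1]
    print(f"Inter-rater reliability (Pearson r): {r:.2f}")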

You might think of this type of reliability as "calibrating" the observers. There are other things you could do to encourage reliability between observers, even if you don't estimate it. For instance, I used to work in a psychiatric unit where every morning a nurse had to do a ten-item rating of each patient on the unit. Of course, we couldn't count on the same nurse being present every day, so we had to find a way to assure that any of the nurses would give comparable ratings. The way we did it was to hold weekly "calibration" meetings where we would have all of the nurses' ratings for several patients and discuss why they chose the specific values they did. If there were disagreements, the nurses would discuss them and attempt to come up with rules for deciding when they would give a "3" or a "4" for a rating on a specific item. Although this was not an estimate of reliability, it probably went a long way toward improving the reliability between raters.

Test-Retest Reliability

We estimate test-retest reliability when we administer the same test to the same sample on two different occasions. This approach assumes that there is no substantial change in the construct being measured between the two occasions. The amount of time allowed between measures is critical. We know that if we measure the same thing twice, the correlation between the two observations will depend in part on how much time elapses between the two measurement occasions. The shorter the time gap, the higher the correlation; the longer the time gap, the lower the correlation. This is because the two observations are related over time -- the closer in time we get the more similar the factors that contribute to error. Since this correlation is the test-retest estimate of reliability, you can obtain considerably different estimates depending on the interval.


Parallel-Forms Reliability

In parallel forms reliability you first have to create two parallel forms. One way to accomplish this is to create a large set of questions that address the same construct and then randomly divide the questions into two sets. You administer both instruments to the same sample of people. The correlation between the two parallel forms is the estimate of reliability. One major problem with this approach is that you have to be able to generate lots of items that reflect the same construct. This is often no easy feat. Furthermore, this approach makes the assumption that the randomly divided halves are parallel or equivalent. Even by chance this will sometimes not be the case. The parallel forms approach is very similar to the split-half reliability described below. The major difference is that parallel forms are constructed so that the two forms can be used independent of each other and considered equivalent measures. For instance, we might be concerned about a testing threat to internal validity. If we use Form A for the pretest and Form B for the posttest, we minimize that problem. It would be even better to randomly assign individuals to receive Form A or B on the pretest and then switch them on the posttest. With split-half reliability we have an instrument that we wish to use as a single measurement instrument and only develop randomly split halves for purposes of estimating reliability.
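
A minimal sketch of the parallel-forms idea, using simulated data (numpy assumed): a 20-item pool measuring one construct is randomly split into Form A and Form B, and the correlation between scores on the two forms is taken as the reliability estimate.

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_items = 200, 20

    # Simulate responses driven by one underlying construct plus item-level noise.
    construct = rng.normal(size=n_people)
    item_pool = construct[:, None] + rng.normal(scale=0.7, size=(n_people, n_items))

    # Randomly divide the pool into two parallel forms and score each form.
    order = rng.permutation(n_items)
    form_a = item_pool[:, order[:10]].sum(axis=1)
    form_b = item_pool[:, order[10:]].sum(axis=1)

    print(f"Parallel-forms reliability estimate: {np.corrcoef(form_a, form_b)[0, 1]:.2f}")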


Internal Consistency Reliability

In internal consistency reliability estimation we use our single measurement instrument administered to a group of people on one occasion to estimate reliability. In effect we judge the reliability of the instrument by estimating how well the items that reflect the same construct yield similar results. We are looking at how consistent the results are for different items for the same construct within the measure. There are a wide variety of internal consistency measures that can be used.

Average Inter-item Correlation

The average inter-item correlation uses all of the items on our instrument that are designed to measure the same construct. We first compute the correlation between each pair of items, as illustrated in the figure. For example, if we have six items we will have 15 different item pairings (i.e., 15 correlations). The average inter-item correlation is simply the average or mean of all these correlations. In the example, we find an average inter-item correlation of .90 with the individual correlations ranging from .84 to .95.


Average Item-Total Correlation

This approach also uses the inter-item correlations. In addition, we compute a total score for the six items and use that as a seventh variable in the analysis. The figure shows the six item-to-total correlations at the bottom of the correlation matrix. They range from .82 to .88 in this sample analysis, with the average of these at .85.
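
Both estimates just described can be computed directly from an item-score matrix. Below is a minimal sketch for a six-item scale using simulated responses (numpy assumed); the .90 and .85 values quoted above come from the original figure, not from this made-up data.

    import numpy as np

    rng = np.random.default_rng(1)
    construct = rng.normal(size=200)
    items = construct[:, None] + rng.normal(scale=0.5, size=(200, 6))   # 200 people x 6 items

    # Average inter-item correlation: mean of the 15 unique pairwise correlations.
    R = np.corrcoef(items, rowvar=False)
    avg_inter_item = R[np.triu_indices(6, k=1)].mean()

    # Average item-total correlation: each item correlated with the total score.
    total = items.sum(axis=1)
    item_total = [np.corrcoef(items[:, j], total)[0, 1] for j in range(6)]

    print(f"Average inter-item correlation: {avg_inter_item:.2f}")
    print(f"Average item-total correlation: {np.mean(item_total):.2f}")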


Split-Half Reliability

In split-half reliability we randomly divide all items that purport to measure the same construct into two sets. We administer the entire instrument to a sample of people and calculate the total score for each randomly divided half. The split-half reliability estimate, as shown in the figure, is simply the correlation between these two total scores. In the example it is .87.
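
A minimal sketch of the split-half estimate on the same kind of simulated six-item data (numpy assumed): the items are randomly divided into two halves, each half is totalled, and the two totals are correlated.

    import numpy as np

    rng = np.random.default_rng(2)
    construct = rng.normal(size=200)
    items = construct[:, None] + rng.normal(scale=0.5, size=(200, 6))

    # Randomly split the six items into two halves and total each half.
    order = rng.permutation(6)
    half_a = items[:, order[:3]].sum(axis=1)
    half_b = items[:, order[3:]].sum(axis=1)

    print(f"Split-half reliability estimate: {np.corrcoef(half_a, half_b)[0, 1]:.2f}")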


Cronbach's Alpha (a)

Imagine that we compute one split-half reliability and then randomly divide the items into another set of split halves and recompute, and keep doing this until we have computed all possible split-half estimates of reliability. Cronbach's Alpha is mathematically equivalent to the average of all possible split-half estimates, although that's not how we compute it. Notice that when I say we compute all possible split-half estimates, I don't mean that each time we go and measure a new sample! That would take forever. Instead, we calculate all split-half estimates from the same sample. Because we measured all of our sample on each of the six items, all we have to do is have the computer analysis do the random subsets of items and compute the resulting correlations. The figure shows several of the split-half estimates for our six-item example and lists them as SH with a subscript. Just keep in mind that although Cronbach's Alpha is equivalent to the average of all possible split-half correlations we would never actually calculate it that way. Some clever mathematician (Cronbach, I presume!) figured out a way to get the mathematical equivalent a lot more quickly.
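
In practice alpha is computed from item and total-score variances rather than by averaging split halves. A minimal sketch on the same kind of simulated data (numpy assumed), using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

    import numpy as np

    rng = np.random.default_rng(3)
    construct = rng.normal(size=200)
    items = construct[:, None] + rng.normal(scale=0.5, size=(200, 6))

    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)

    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha: {alpha:.2f}")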


Comparison of Reliability Estimators

Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation. However, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. For example, let's say you collected videotapes of child-mother interactions and had a rater code the videos for how often the mother smiled at the child. To establish inter-rater reliability you could take a sample of videos and have two raters code them independently. To estimate test-retest reliability you could have a single rater code the same videos on two different occasions. You might use the inter-rater approach especially if you were interested in using a team of raters and you wanted to establish that they yielded consistent results. If you get a suitably high inter-rater reliability you could then justify allowing them to work independently on coding different videos. You might use the test-retest approach when you only have a single rater and don't want to train any others. On the other hand, in some studies it is reasonable to do both to help establish the reliability of the raters or observers.

The parallel forms estimator is typically only used in situations where you intend to use the two forms as alternate measures of the same thing. Both the parallel forms and all of the internal consistency estimators have one major constraint -- you have to have multiple items designed to measure the same construct. This is relatively easy to achieve in certain contexts like achievement testing (it's easy, for instance, to construct lots of similar addition problems for a math test), but for more complex or subjective constructs this can be a real challenge. If you do have lots of items, Cronbach's Alpha tends to be the most frequently used estimate of internal consistency.

The test-retest estimator is especially feasible in most experimental and quasi-experimental designs that use a no-treatment control group. In these designs you always have a control group that is measured on two occasions (pretest and posttest). The main problem with this approach is that you don't have any information about reliability until you collect the posttest and, if the reliability estimate is low, you're pretty much sunk.

Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel forms and internal consistency ones because they involve measuring at different times or with different raters. Since reliability estimates are often used in statistical analyses of quasi-experimental designs (e.g., the analysis of the nonequivalent group design), the fact that different estimates can differ considerably makes the analysis even more complex.


SOURCE


Sunday, February 20, 2011

Insert Tick Symbol in Microsoft Office (Excel, Word, PowerPoint)

To add a tick symbol in MS Office (Excel, Word or PowerPoint), go to Insert > Symbol from the toolbar, select the Wingdings font, and enter 252 in the character code box.

Here you will find both the simple tick and a ticked-box symbol.

A cross symbol and a crossed-box symbol are also available there and can be inserted if desired.
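
If you need to do this programmatically rather than through the Insert > Symbol dialog, here is a minimal Python sketch using the openpyxl package (an assumption; it is not mentioned in the post). It writes character 252 into a cell and applies the Wingdings font, which renders as a tick when the file is opened in Excel.

    from openpyxl import Workbook
    from openpyxl.styles import Font

    wb = Workbook()
    ws = wb.active

    # Character code 252 in the Wingdings font displays as a tick.
    ws["A1"] = chr(252)
    ws["A1"].font = Font(name="Wingdings")

    wb.save("tick_demo.xlsx")   # hypothetical output file name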


Negotiate your salary demands smartly

When you begin your job hunt and successfully pass the various stages of an interview, you eventually reach the point where you are asked to quote your expected salary. Many job seekers get confused at this stage and end up asking for too little or too much. While it is true that a job opportunity is seen as a chance for career growth and a brighter future, it is also a fact that we wish to receive pay that matches our skills and experience. A job opportunity with a good salary package is bound to attract the best candidates, as job seekers are naturally interested in their financial well-being as well as their professional growth.

Many job seekers have no idea how to negotiate their salary demands and often end up getting far less than what they had hoped for. Smart salary negotiation is an art that can be learned through practice and research. By following some of the useful tips given below, you can make sure that you get the salary package you deserve.


 

Important Tips for Salary Negotiations:

Given below are some of the most important tips and techniques that can help you negotiate a salary package with a prospective employer successfully.


 

Illustrate Your Experience:

The most important thing employers consider when they propose a salary package to a prospective employee is the experience he or she has gained working in the field. You list the relevant experience in your resume, but just putting it there will not do the trick. When you are being interviewed, the employer needs to see that you have the skills and experience necessary to cope with the job requirements. Prepare some examples of problems that arose at your last job and how you used your skills to handle the situation effectively. Showing the employer that you have the relevant experience can go a long way toward getting you the job.


 

Provide Statistics:

When you list your accomplishments in your CV and mention them during the interview, providing actual numbers and statistics can help impress the employer. For instance, if you have been working in the sales or marketing department of a company, rather than saying that you contributed to increasing the company's sales, mention the exact figure, for example that you increased sales by 10 or 15% within a given time period. This makes a better impression on the employer and increases your chances of getting a higher salary package.


 

Don't Ask About The Salary:

One of the most common mistakes job seekers make is asking about the salary themselves as the interview approaches its end. You should always leave it to the employer to inquire about your expected salary and then quote what you have in mind. Even if the issue of salary does not come up during the first interview, it is better to leave it for the second interview. Since by that time the employer has made it clear that you have been shortlisted for the position, you have the upper hand in negotiating the salary package.


 

Do Your Research:

When you are negotiating the expected salary package, you should know the average salary for the position and your level of experience. This will help you quote a more acceptable and reasonable salary package, which will most probably be accepted by the employer as well. You can ask your friends and research on the internet to find the average salary for your position.


 

Know The Lower Limit:

When you quote a salary package to the employer, they are bound to negotiate and try to bring your demand down. You need to know what the lowest expected salary range is for the job that you are interviewing for. By knowing the lower limit, you can maintain your stance on a specific amount and make sure that you do not settle for anything less than that.


 

Keep Personal Stuff To Yourself:

When you enter salary negotiations with a prospective employer, make sure that you base your negotiations on your capabilities and experience. Citing personal and financial problems as a reason for demanding a certain salary package is highly unprofessional and endangers your chances of getting the job. No matter how much you need the job, there is no reason to share personal problems with the employer.


 

Don't Panic:

One of the most important things when negotiating the salary package is to remain calm and controlled. When you panic or become agitated, you lose the upper hand and end up making the wrong decision. Even when you are negotiating with the employer on the salary package, you are being judged on your ability to handle pressure situations, so panicking can be the biggest mistake you make during an interview.


 

Look For Benefits:

Even if your prospective employer does not meet your salary expectations, you can inquire about other benefits, such as health insurance. By getting some benefits other than cash compensation, you can actually lessen your financial burden, which might make the lower salary package acceptable.


 

Be Prepared For Tough Questions:

When you ask for a salary package, be prepared to answer tough questions such as, "Why do you think you should be paid so much money?" Do not be offended or hurt by such questions; answer them as calmly as possible and show that you are the best person for the job and therefore deserve the pay you are asking for.


 

Conclusion:

There are no hard and fast rules on how one should negotiate a salary package, but by following the tips given in this article you can maximize your chances of getting the salary you want.


Friday, February 18, 2011

Inventory of Assessment Resources

Source : Macalester College

Terms and Definitions

Assessment

Over the years, assessment has been used to describe either a process toward improvement, or a process toward accountability, sometimes both. At Macalester, our goal for assessment is the continual improvement in the quality of the curricular and co-curricular programming offered by the College to its students. Following are a few descriptions of assessment that we've found especially useful:

"Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning." (Huba and Freed p. 8)

"…A systemic and systematic process of examining student work against our standards of judgment, it enables us to determine the fit between what we expect our students to be able to demonstrate or represent and what they actually do demonstrate or represent at points along their educational careers. Beyond its role of ascertaining what students learn in individual courses, assessment, as a collective institutional process of inquiry, examines students' learning over time. It explores multiple sources of evidence that enable us to draw inferences about how students make meaning based on our educational practices." (Maki, p. 2)

"Assessment is more than the collection of data. To make assessment work, educators must be purposeful about the information they collect. As a basis for data gathering, they must clarify their goals and objectives for learning and be aware of where these goals and objectives are addressed in the curriculum…Hutchings and Marchese suggest that the meaning of assessment is captured best by its key questions. What should college graduates know, be able to do, and value? Have the graduates of our institutions acquired this learning? What, in fact, are the contributions of the institution and its programs to student growth? How can student learning be improved? (Palomba and Banta, p. 4)


Direct v. Indirect Assessment

There are a variety of tools that may be used for different types of assessment. It is important that the tools selected are a good "fit" with the goals and objectives of the intended learning outcomes. Ideally, emphasis should be placed on direct assessment methods. However, to complement the range of learning that takes place, it is recommended that a well-balanced assessment plan include a variety of assessment methods. This section will introduce some basic terms and types of assessment.

Direct Assessments

Direct assessments provide students the opportunity to show what they know. They "…prompt students to represent or demonstrate their learning or produce work so that observers can assess how well students' texts or responses fit institutional or program-level expectations." (Maki, p. 88)

The use of direct assessment requires clear objectives, and a set of criteria by which the work will be evaluated. (Walvoord p. 13) The use of rubrics is recommended to help define the expectations for a given task, and to aid in the process of comparing work over time. Examples of rubrics are available in Assessment Examples and Resources.

Examples of direct assessment:

  • Capstone experience. As noted in the Macalester College catalog, students are required to complete a capstone experience: "The purpose of the capstone requirement is to give students experience with reading original research literature, doing original work, or presenting a performance. The requirement may be met in many ways, e.g. senior seminar, independent project, or honors project. The means of completing this experience are designated by the departments..." The capstone experience typically culminates in a significant effort such as a major research paper/project with oral presentation, often including peer and faculty review, presenting a paper at a conference, etc.
  • MLA (Macalester Learning Assessment)
  • CLA (Collegiate Learning Assessment)
  • Portfolios, or e-portfolios, are collections of student work over a period of time. A portfolio may include a student's work from the beginning of their college career to graduation, or any other portion of that time, such as within a particular class or department.
  • Internal and external juried reviews (e.g. speeches, recitals, performances in the arts, exhibitions or colloquia)
  • Oral exams
  • Individual or group projects (peer evaluations and/or faculty or staff rating)
  • Public presentations
  • One-minute paper
  • Embedded assignments, such as test questions or essay questions embedded in the course
  • National-testing within a discipline or licensure exams (e.g. Major Field Achievement Test)
  • Evaluations completed by an internship supervisor

Please see Assessment Examples and Resources for descriptions and specific examples of direct assessment tools.

Indirect Assessments

In addition to something students are able to "produce" as a result of their learning experiences, value is also placed on students' perceptions of this experience.

Indirect assessments "…capture students' perceptions of their learning and the educational environment that supports that learning, such as access to and the quality of services, programs, or educational offerings that support their learning…By themselves, results of indirect methods cannot substitute for the evidence of learning that direct methods provide. They can, however, contribute to interpreting the results of direct methods…" (Maki, p. 88, 89)

Examples of indirect assessment:

  • Self-reported student experiences, such as those included in NSSE
  • Satisfaction surveys
  • Alumni surveys
  • Exit interviews with graduates
  • Group discussions
  • Employer surveys

Please see Assessment Examples and Resources for descriptions and specific examples of indirect assessment tools.

Formative v. Summative Assessment

Most methods of indirect or direct assessment may be either formative or summative, depending upon the particular design and when the assessment is introduced. Many grant proposals require a combination of both formative and summative assessment.

Formative Assessment

Formative assessment seeks "evidence of learning along the progression of students' studies." (Maki, p. 89) It is used to understand a student's progress or a program's effectiveness in moving toward a goal. It may be thought of as a diagnostic assessment tool, whereby faculty or program managers verify whether or not progress is being made as expected. Because formative assessment is implemented throughout the learning process, faculty or program managers may implement any changes relatively quickly—while the student is still in the class/department/program. One example of formative assessment is the "one-minute paper." The intent of this method is to identify whether students are able to understand the "key takeaway" for a given class period.

Formative assessment may be used to track progress toward successful attainment of learning outcomes, and help to identify whether changes are necessary in order to meet the goals.

Summative Assessment

Summative assessments are used to understand whether a goal has been met. Summative assessment documents achievement of institution-level and program-level learning goals.

Qualitative v. Quantitative

Both qualitative and quantitative methods bring valuable information to light, but they yield different types of results. Review the type of information needed when selecting a method, and also consider whether qualitative and quantitative research could be used together. Please contact the Assessment Office if you would like to brainstorm options.

Qualitative Methods

Qualitative research uses open-ended questions to gain an in-depth understanding of the questions being explored. Common qualitative techniques include focus group discussions, mini-groups, and in-depth interviews either face-to-face or via telephone.

Quantitative Methods

"Quantitative methods are distinguished by their emphasis on numbers, measurement, experimental design, and statistical analysis. Researchers typically work with a small number of predetermined response categories to capture various experiences and perspectives of individuals. Often emphasis is on analyzing a large number of cases using carefully constructed instruments that have been evaluated for their reliability and validity (Patton, 1990). Techniques include questionnaires, structured interviews, and tests." (Palomba and Banta, p. 337)

Sources:

Allen, Mary J. "From Assessment to Academic Excellence: Intentionally Mapping Student Success." 2008 AAC&U General Education and Assessment Conference. February 21, 2008.

Huba, Mary E. and Freed, Jann E. Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Needham Heights, MA: Allyn and Bacon, 2001.

Maki, Peggy. Assessing for Learning. Sterling, VA: American Association for Higher Education, 2004.

Palomba, Catherine A. and Banta, Trudy W. Assessment Essentials. San Francisco: Jossey-Bass Publishers, 1999.

Schuh, John H. and Upcraft, M. Lee. Assessment Practice in Student Affairs: An Applications Manual. San Francisco: Jossey-Bass, 2001.

Walvoord, Barbara E. Assessment Clear and Simple. San Francisco: Jossey-Bass, 2004.


Glossary of Assessment Terms

Source : American Public University System

Assessment

The systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development. (Palomba & Banta, 1999)

An ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. (Angelo, 1995)

Benchmarking

An actual measurement of group performance against an established standard at defined points along the path toward the standard. Subsequent measurements of group performance use the benchmarks to measure progress toward achievement. (New Horizons for Learning)

Bloom's Taxonomy of Cognitive Objectives

Six levels arranged in order of increasing complexity (1=low, 6=high):

  1. Knowledge: Recalling or remembering information without necessarily understanding it. Includes behaviors such as describing, listing, identifying, and labeling.
  2. Comprehension: Understanding learned material and includes behaviors such as explaining, discussing, and interpreting.
  3. Application: The ability to put ideas and concepts to work in solving problems. It includes behaviors such as demonstrating, showing, and making use of information.
  4. Analysis: Breaking down information into its component parts to see interrelationships and ideas. Related behaviors include differentiating, comparing, and categorizing.
  5. Synthesis: The ability to put parts together to form something original. It involves using creativity to compose or design something new.
  6. Evaluation: Judging the value of evidence based on definite criteria. Behaviors related to evaluation include: concluding, criticizing, prioritizing, and recommending. (Bloom, 1956)

Classroom Assessment

The systematic and on-going study of what and how students are learning in a particular classroom; often designed for individual faculty who wish to improve their teaching of a specific course. Classroom assessment differs from tests and other forms of student assessment in that it is aimed at course improvement, rather than at assigning grades. (National Teaching & Learning Forum)

Direct Assessment

Gathers evidence about student learning based on student performance that demonstrates the learning itself. Can be value added, related to standards, qualitative or quantitative, embedded or not, using local or external criteria. Examples are written assignments, classroom assignments, presentations, test results, projects, logs, portfolios, and direct observations. (Leskes, 2002)

Embedded Assessment

A means of gathering information about student learning that is built into and a natural part of the teaching-learning process. Often uses, for assessment purposes, classroom assignments that are evaluated to assign students a grade. Can assess individual student performance or aggregate the information to provide information about the course or program; can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy). (Leskes, 2002)

Evaluation

The use of assessment findings (evidence/data) to judge program effectiveness; used as a basis for making decisions about program changes or improvement. (Allen, Noel, Rienzi & McMillin, 2002)

Formative Assessment

The gathering of information about student learning, during the progression of a course or program and usually repeatedly, to improve the learning of those students. Example: reading the first lab reports of a class to assess whether some or all students in the group need a lesson on how to make them succinct and informative. (Leskes, 2002)

Indirect Assessment

Acquiring evidence about how students feel about learning and their learning environment rather than actual demonstrations of outcome achievement. Examples include surveys, questionnaires, interviews, focus groups, and reflective essays. (Eder, 137)

Learning Outcomes

Operational statements describing specific student behaviors that evidence the acquisition of desired knowledge, skills, abilities, capacities, attitudes or dispositions. Learning outcomes can be usefully thought of as behavioral criteria for determining whether students are achieving the educational objectives of a program, and, ultimately, whether overall program goals are being successfully met. Outcomes are sometimes treated as synonymous with objectives, though objectives are usually more general statements of what students are expected to achieve in an academic program. (Allen, Noel, Rienzi & McMillin, 2002)

Norm-Referenced Assessment

An assessment where student performance or performances are compared to a larger group. Usually the larger group or "norm group" is a national sample representing a wide and diverse cross-section of students. Students, schools, districts, and even states are compared or rank-ordered in relation to the norm group. The purpose of a norm-referenced assessment is usually to sort students and not to measure achievement towards some criterion of performance.

Performance Criteria

The standards by which student performance is evaluated. Performance criteria help assessors maintain objectivity and provide students with important information about expectations, giving them a target or goal to strive for. (New Horizons for Learning)

Portfolio

A systematic and organized collection of a student's work that exhibits to others the direct evidence of a student's efforts, achievements, and progress over a period of time. The collection should involve the student in selection of its contents, and should include information about the performance criteria, the rubric or criteria for judging merit, and evidence of student self-reflection or evaluation. It should include representative work, providing a documentation of the learner's performance and a basis for evaluation of the student's progress. Portfolios may include a variety of demonstrations of learning and have been gathered in the form of a physical collection of materials, videos, CD-ROMs, reflective journals, etc. (New Horizons for Learning)

Qualitative Assessment

Collects data that does not lend itself to quantitative methods but rather to interpretive criteria. (Leskes, 2002)

Rubric

Specific sets of criteria that clearly define for both student and teacher what a range of acceptable and unacceptable performance looks like. Criteria define descriptors of ability at each level of performance and assign values to each level. Levels referred to are proficiency levels which describe a continuum from excellent to unacceptable product. (System for Adult Basic Education Support)

Standards

Sets a level of accomplishment all students are expected to meet or exceed. Standards do not necessarily imply high quality learning; sometimes the level is a lowest common denominator. Nor do they imply complete standardization in a program; a common minimum level could be achieved by multiple pathways and demonstrated in various ways. (Leskes, 2002)

Summative Assessment

The gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, impacts the next cohort of students taking the course or program. Example: examining student final exams in a course to see if certain specific areas of the curriculum were understood less well than others. (Leskes, 2002)

Value Added

The increase in learning that occurs during a course, program, or undergraduate education. Can either focus on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills, in the aggregate, than freshman papers). Requires a baseline measurement for comparison. (Leskes, 2002)


Sources


Common Assessment Terms

Source : Carnegie Mellon University

Assessment for Accountability

The assessment of some unit, such as a department, program or entire institution, which is used to satisfy some group of external stakeholders. Stakeholders might include accreditation agencies, state government, or trustees. Results are often compared across similar units, such as other similar programs and are always summative. An example of assessment for accountability would be ABET accreditation in engineering schools, whereby ABET creates a set of standards that must be met in order for an engineering school to receive ABET accreditation status.

Assessment for Improvement

Assessment activities that are designed to feed the results directly, and ideally, immediately, back into revising the course, program or institution with the goal of improving student learning. Both formative and summative assessment data can be used to guide improvements.

Concept Maps

Concept maps are graphical representations that can be used to reveal how students organize their knowledge about a concept or process. They include concepts, usually represented in enclosed circles or boxes, and relationships between concepts, indicated by a line connecting two concepts.

Direct Assessment of Learning

Direct assessment is when measures of learning are based on student performance that demonstrates the learning itself. Scoring performance on tests, term papers, or the execution of lab skills would all be examples of direct assessment of learning. Direct assessment of learning can occur within a course (e.g., performance on a series of tests) or could occur across courses or years (comparing writing scores from sophomore to senior year).

Embedded Assessment

A means of gathering information about student learning that is integrated into the teaching-learning process. Results can be used to assess individual student performance or they can be aggregated to provide information about the course or program. Embedded assessment can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).

External Assessment

Use of criteria (rubric) or an instrument developed by an individual or organization external to the one being assessed. This kind of assessment is usually summative, quantitative, and often high-stakes, such as the SAT or GRE exams.

Formative Assessment

Formative assessment refers to the gathering of information or data about student learning during a course or program that is used to guide improvements in teaching and learning. Formative assessment activities are usually low-stakes or no-stakes; they do not contribute substantially to the final evaluation or grade of the student or may not even be assessed at the individual student level.  For example, posing a question in class and asking for a show of hands in support of different response options would be a formative assessment at the class level.  Observing how many students responded incorrectly would be used to guide further teaching.

High stakes Assessment

The decision to use the results of assessment to set a hurdle that needs to be cleared for completing a program of study, receiving certification, or moving to the next level. Most often, the assessment so used is externally developed, based on set standards, carried out in a secure testing situation, and administered at a single point in time. Examples: at the secondary school level, statewide exams required for graduation; in postgraduate education, the bar exam.

Indirect Assessment of Learning

Indirect assessments use perceptions, reflections or secondary evidence to make inferences about student learning. For example, surveys of employers, students' self-assessments, and admissions to graduate schools are all indirect evidence of learning.

Individual Assessment

Uses the individual student, and his/her learning, as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement. Most of the student assessment conducted in higher education is focused on the individual. Student test scores, improvement in writing during a course, or a student's improvement in presentation skills over their undergraduate career are all examples of individual assessment.

Institutional Assessment

Uses the institution as the level of analysis. The assessment can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally, institution-wide goals and objectives would serve as a basis for the assessment. For example, to measure the institutional goal of developing collaboration skills, an instructor and peer assessment tool could be used to measure how well seniors across the institution work in multi-cultural teams.

Local Assessment

Means and methods that are developed by an institution's faculty based on their teaching approaches, students, and learning goals. An example would be an English Department's construction and use of a writing rubric to assess incoming freshmen's writing samples, which might then be used to assign students to appropriate writing courses, or might be compared to senior writing samples to get a measure of value added.

Program Assessment

Uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability.  Ideally, program goals and objectives would serve as a basis for the assessment. Example: How well can senior engineering students apply engineering concepts and skills to solve an engineering problem?  This might be assessed through a capstone project, by combining performance data from multiple senior level courses, collecting ratings from internship employers, etc.  If a goal is to assess value added, some comparison of the performance to newly declared majors would be included.

Qualitative Assessment

Collects data that does not lend itself to quantitative methods but rather to interpretive criteria (see the first example under  "standards").

Quantitative Assessment

Collects data that can be analyzed using quantitative methods (see  "assessment for accountability" for an example).

Rubric

A rubric is a scoring tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.

Standards

Standards refer to an established level of accomplishment that all students are expected to meet or exceed. Standards do not imply standardization of a program or of testing. Performance or learning standards may be met through multiple pathways and demonstrated in various ways.  For example, instruction designed to meet a standard for verbal foreign language competency may include classroom conversations, one-on-one interactions with a TA, or the use of computer software. Assessing competence may be done by carrying on a conversation about daily activities or a common scenario, such as eating in a restaurant, or using a standardized test, using a rubric or grading key to score correct grammar and comprehensible pronunciation.

Summative Assessment

The gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see if certain specific areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines.

Value Added

The increase in learning that occurs during a course, program, or undergraduate education. Can either focus on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills, in the aggregate, than freshman papers). To measure value added, a baseline measurement is needed for comparison. The baseline measure can be from the same sample of students (longitudinal design) or from a different sample (cross-sectional).

Adapted from Assessment Glossary compiled by American Public University System, 2005
http://www.apus.edu/Learning-Outcomes-Assessment/Resources/Glossary/Assessment-Glossary.htm


Wednesday, February 16, 2011

U.S. teacher strikes nerve with 'lazy whiners' blog

By PATRICK WALTERS, Associated Press Patrick Walters, Associated Press

FEASTERVILLE, Pa. – A high school English teacher in suburban Philadelphia who was suspended for a profanity-laced blog in which she called her young charges "disengaged, lazy whiners" has caused a sensation by daring to ask: Why are today's students unmotivated — and what's wrong with calling them out?

As she fights to keep her job at Central Bucks East High School, 30-year-old Natalie Munroe says she had no interest in becoming any sort of educational icon. The blog has been taken down, but its contents can still be found easily online.

Her comments and her suspension by the middle-class school district have clearly touched a nerve, with scores of online commenters applauding her for taking a tough love approach or excoriating her for verbal abuse. Media attention has rained down, and backers have started a Facebook group.

"My students are out of control," Munroe, who has taught 10th, 11th and 12th grades, wrote in one post. "They are rude, disengaged, lazy whiners. They curse, discuss drugs, talk back, argue for grades, complain about everything, fancy themselves entitled to whatever they desire, and are just generally annoying."

And in another post, Munroe — who is more than eight months pregnant — quotes from the musical "Bye Bye Birdie": "Kids! They are disobedient, disrespectful oafs. Noisy, crazy, sloppy, lazy LOAFERS."

She also listed some comments she wished she could post on student evaluations, including: "I hear the trash company is hiring"; "I called out sick a couple of days just to avoid your son"; and "Just as bad as his sibling. Don't you know how to raise kids?"

Munroe did not use her full name or identify her students or school in the blog, which she started in August 2009 for friends and family. Last week, she said, students brought it to the attention of the school, which suspended her with pay.

"They get angry when you ask them to think or be creative," Munroe said of her students in an interview with The Associated Press on Tuesday. "The students are not being held accountable."

Munroe pointed out that she also said positive things, but she acknowledges that she did write some things out of frustration — and of a feeling that many kids today are being given a free pass at school and at home.

"Parents are more trying to be their kids' friends and less trying to be their parent," Munroe said, also noting students' lack of patience. "They want everything right now. They want it yesterday."

One of Munroe's former students, who now attends McDaniel College in Westminster, Md., said he was torn by his former teacher's comments. Jeff Shoolbraid said that he thought much of what Munroe said was true and that she had a right to voice her opinion, but felt her comments were out of line for a teacher.

"Whatever influenced her to say what she did is evidence as to why she simply should not teach," Shoolbraid wrote in an e-mail to the AP. "I just thought it was completely inappropriate."

He continued: "As far as motivated high school students, she's completely correct. High school kids don't want to do anything. ... It's a teacher's job, however, to give students the motivation to learn."

A spokesman for the Pennsylvania State Education Association declined to comment Tuesday because he said the group may represent Munroe. Messages left for the Central Bucks School District superintendent were not returned.

Sandi Jacobs, vice president of the National Council on Teacher Quality, said school districts are navigating uncharted territory when it comes to teachers' online behavior. Often, districts want teachers to have more contact with students and their families, yet give little guidance on how teachers should behave online even as students are more plugged in than they've ever been.

"This is really murky stuff," she said. "When you have a teacher using their blog to berate their students, maybe that's a little less murky. But the larger issue is, I think, districts are totally unprepared to deal with this."

Munroe has hired an attorney, who said that she had the right to post her thoughts on the blog and that it's a free speech issue. The attorney, Steven Rovner, said the district has led Munroe to believe that she will eventually lose her job.

"She could have been any person, any teacher in America writing about their lives," he said, pointing out that Munroe blogged about 85 times and that only about 15 to 20 of the posts involved her being a teacher. "It's honest and raw and a little edgy depending on your taste. ... She has a deep frustration for the educational system in America."

Rovner said that he would consider legal action if indeed Munroe loses her job.

"She did it as carefully as she could," he said about her blog. "It's so general that it applies to the problems in school districts and schools across the country."

__

Associated Press writer Dorie Turner in Atlanta contributed to this report.


Wednesday, February 9, 2011

Program Assessment Toolkit

Developed by Central Michigan University (CMU)

The toolkit includes:

  • Steps for Developing a Program Assessment Plan: a guide to assist with developing an assessment plan.
  • Assessment: "A Guide": visualize the "big picture" of outcomes-based assessment.
  • "Organize Your Thoughts": Assessment Plan Worksheet: a worksheet to assist you with connecting your program mission, student learning outcomes and assessment strategies.
  • "Organize Your Thoughts": Identifying Strategies: a worksheet to assist you with identifying possible links between other learning opportunities and how they may be an outstanding strategy for your program assessment plan.
  • Why care about assessment of student learning? Discover why assessing student learning is a good thing.
  • What are the domains of student learning? A blueprint of the cognitive, psychomotor, and affective domains and how to use them to describe what you intend for students to learn in your program.
  • Determining student learning outcomes: how to take a blueprint (program mission) and build it into something useful (student learning outcomes).
  • Writing student learning outcomes for CMU programs: the right tool to help you write student learning outcomes.
  • Formative and summative assessment: the differences between formative and summative assessment.
  • Developing rubrics: step-by-step instructions for constructing a rubric.
  • Using survey methods in student learning outcomes assessment: appropriate use and development of surveys.
  • Course-embedded assessment: how course assessments can serve as the building blocks for an assessment system.
  • Portfolio assessment: information on this assessment tool, which serves numerous purposes.
  • Using multiple measures in student learning outcomes assessment: help with developing a big picture of different assessment options for your program.
  • What are you already doing that can be used for student learning outcomes assessment? Discover what you are already doing in student learning outcomes assessment.
  • Applied experiences: information on applied experiences (e.g. internships, student teaching) and the tools used to evaluate these experiences.
  • Using capstone experiences in student learning outcomes assessment: what a capstone experience is and how to assess student learning as part of it.
  • Extracurricular learning and assessment: how the work you do with students in clubs and organizations can help assess student learning outcomes.
  • Assessment in graduate programs: questions to consider before building an assessment plan for your graduate program.

Authors of the Toolkit

SOURCE

