Math Toolkit for Real-Time Programming



Math Toolkit for Real-Time Programming, by Jack Crenshaw. Publisher: CMP Books. Do big math on small machines. Write fast and accurate library functions. Master analytical and numerical calculus. Perform numerical integration to any order. Implement z-transform formulas. Need to learn the ins and outs of the fundamental math functions? Master analytical and numerical calculus with this solid course in applied math from the renowned columnist of Embedded Systems Programming magazine. You will learn how to do big math on small machines with fast and accurate library functions, numerical integration to any order, and z-transform formulas. Features never-before-published methods and a versatile set of algorithms to use in your own projects.

About the Author: Jack Crenshaw holds a Ph.D. in physics. He wrote his first computer program decades ago and his first microcomputer software, a real-time, floating-point, Kalman filter-driven controller, not long after. He has been working with real-time software for embedded systems ever since, and thinks he might be beginning to get the hang of it. He is currently a senior principal design engineer for Alliant TechSystems, Inc.

If you have a special case where underflow really does indicate an error, your program needs special treatment.

Belt and Suspenders
As the underflow example indicates, the typical error-handling mechanism of the library routines is unacceptable for embedded systems. In other words, you must trap the errors before they get to the library routines and deal with them in ways appropriate for real-time systems.

The only question, then, is whether the program should report the error. There are two or three schools of thought on the way to handle such errors. Some programmers like to have each routine report its own error to the console.

The Unix functions tend to work this way. Others feel that library routines should never produce output but, at the very most, should return an error code for the caller to handle. For real-time embedded systems, my typical approach is the same as for the underflow handler: deal with the problem quietly and return something reasonable. Take the square root of a negative argument. You may think a negative argument is a serious problem — one that implies a bug in the program, and therefore one that needs serious action.

But suppose the number should have been 0, but merely came out as a very small negative number because of round-off error. In such a case, crying wolf is inappropriate. The program is working properly, and the function should simply return a result of zero.

No error message is written. This code is part of my math library, jmath, and is shown in the listing. Every function in a C program needs a unique name. Variable names and functions are considered different if they have different combinations of upper- and lowercase characters.
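To give the flavor of such a guarded routine, here is a minimal sketch. It is an illustration only, not the book's jmath listing; it quietly returns zero for any negative argument, while the surrounding text discusses when a large negative value deserves louder treatment.

    #include <math.h>

    /* Minimal sketch of a guarded square root: negative arguments, assumed
       to be round-off artifacts, are quietly mapped to zero. */
    double safe_sqrt(double x)
    {
        if (x < 0.0)
            return 0.0;
        return sqrt(x);
    }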

When I started writing safe functions, I wondered what I should call them. Sometimes I tacked on a leading j (for Jack), but that solution always seemed a bit egotistical. The other day, I thought I had the perfect solution: capitalize the name and rely on case sensitivity, using Sqrt instead of sqrt. With this approach, I not only could get more robust behavior, I could also eliminate the hated type-qualifying characters in the function names.

So, originally, I named the function in the listing Sqrt. Imagine my surprise when I tried this on the Microsoft compiler and found that it insisted on calling the library function instead. A quick session with CodeView revealed the awful truth: the linker was ignoring the case in my function name! Thus, even though the names sqrt and Sqrt are supposed to be unique according to the C rules of engagement, the Microsoft linker misses this subtlety and sees them as the same.

That system gives the library function preference and ignores the user-supplied function. I learned this the hard way when trying to overload abs. I should say a word here about multiple floating-point formats.

The Intel math coprocessor uses an internal 80-bit format, and some compilers assign this format to the type called long double. The distinction is important, because the library routines for long doubles have different names.

Remember, the designers of the math library chose not to use overloading. So we have functions like sin, cos, sqrt, and atan, for double precision numbers, and their long double equivalents, sinl, cosl, sqrtl, and atanl.
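A small illustration of why the names matter in plain C, where there is no overloading (assuming a compiler whose long double really is the extended format):

    #include <math.h>
    #include <stdio.h>

    void show_difference(void)
    {
        long double x = 2.0L;
        long double a = sqrt((double)x);  /* argument cut down to double first   */
        long double b = sqrtl(x);         /* long double version, full precision */
        printf("%.21Lf\n%.21Lf\n", a, b); /* the low-order digits differ         */
    }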

If you choose to do your arithmetic in long double format, be aware that you must call the long double function. Otherwise, the compiler will truncate your carefully computed, high-precision long double number to double, call the double library function, and convert the now incorrect, or at least inaccurate, value back again. At one time, my own library file, jmath, provided versions for all three floating-point types; for example, I had three square root functions, taking float, double, and long double arguments. For starters, the first function is virtually useless.

The float version still performs a type conversion, because sqrt is a double-precision function. The third function does at least make sure that the correct function is called, and the number is not truncated. If you have a compiler that really, truly uses long doubles, you may want to seriously consider including overloadings for this type. However, my interest in long double precision waned quickly when I learned that the compilers I use for 32-bit applications do not support long double anymore.

The 16-bit compilers did, but the 32-bit ones did not. I ran into trouble with type conversions, not in the code of jmath itself but in the code that used it; whenever I mixed the two precisions, I kept getting compiler errors warning me that constants were being truncated.

In the end, long double arithmetic proved to be more trouble than it was worth, at least to me. So the code in the remainder of this book assumes double-precision arithmetic in all cases. This allowed me to simplify things quite a bit, albeit with a considerable loss of generality. I also considered C++ templates, but they are about as readable as Sanskrit, and I have never been comfortable with the idea of sticking executable code in header files, which is where many compilers insist that templates be put.

If the number computed really is a large negative number, it probably means that something is seriously wrong with the program. If the application program is truly robust and thoroughly tested, the condition should never occur, but be warned. Functions that can only deal with limited ranges of input parameters should be guarded by tests to assure the input is within bounds.

Examples are the inverse sine and cosine. Similar problems can occur with functions that can produce infinite results, like the tangent or hyperbolic functions. Safe versions of these functions are shown in the listing. Note that the limit on the argument of the exponential is extremely conservative.
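Sketches of what such guarded wrappers might look like follow; they are illustrations only, and the clamping values and the exponential limit here are assumptions, not the book's listing.

    #include <math.h>

    static const double HALF_PI = 1.5707963267948966;

    /* Arguments that drift just outside [-1, 1] from round-off are clamped. */
    double safe_asin(double x)
    {
        if (x >=  1.0) return  HALF_PI;
        if (x <= -1.0) return -HALF_PI;
        return asin(x);
    }

    /* The limit on the argument is deliberately conservative. */
    double safe_exp(double x)
    {
        if (x > 46.0)
            return exp(46.0);
        return exp(x);
    }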

In the case of the four-quadrant arctangent, the library function atan2 gives reasonable and accurate answers except for the special case where both arguments are zero. In practice, this should never happen, but if it does, we should still return a reasonable value rather than crash. The factorial function is not shown because you should never be using it; proper coding can and should eliminate the need for it. A word about C++ exceptions: they are a wonderful way to deal with exceptional situations without cluttering up the mainstream code with a lot of tests.

Most implementations generate very, very inefficient code — far too inefficient to be useful in a real-time program.

Exception Exemption
For those not familiar with the term, an exception in a programming language is something that happens outside the linear, sequential processing normally associated with software. The general idea is that, in the normal course of processing, some error may occur that is abnormal; it could be something really serious, like a divide-by-zero error — usually cause for terminating the program.

Or, it could be something that happens under normal processing, but only infrequently, like a checksum error in a data transfer. In any case, the one thing we cannot allow is for the program to crash. The general idea behind exception handling is that when and where an exception occurs, the person best able to make an informed decision as to how to handle it is the programmer, who wrote the code and understands the context.

Only if it reaches the top of the program, still unhandled, do we let the default action occur. One point of debate is the question of what happens after the exception is handled: some languages allow processing to resume right after the point where the exception occurred; others abort the current function. This is an issue of great importance, and it has a profound impact upon the usability of exceptions in real-world programs. The efficiency issue hinges on the question of what an exception really represents: is it an infrequent but normal event that must be handled and recovered from, or is it a fatal error? There are plenty of situations in embedded programs where the first definition makes sense; for example, protocol errors in a serial communication device.

This is definitely a situation that must be dealt with, but it must be dealt with efficiently, and then the program must continue to operate normally. Unfortunately, I fear that most compiler writers see the exception in light of the second viewpoint, as a fatal error. Because they see them this way, they see no reason to implement them efficiently. I mean, if the program is going down, why care how long it takes to do so?

Ironically, one early language with an exception-like mechanism (ON ERROR) was BASIC, hardly what one usually thinks of as a candidate for embedded systems programming. Another language that supports exceptions is Ada. In fact, to my knowledge Ada was the first language to make exceptions an integral part of the language.


Trivia question: what was the second? At least the earlier Ada compilers suffered from the exception-as-fatal-error syndrome, and their writers implemented exceptions so inefficiently that they were impractical to use. In most Ada shops, in fact, the use of them was verboten.

The Functions that Time Forgot
The math library functions typically supplied in current programming languages include sine, cosine, arctangent, exponential, and log.

But other related functions need to be supported, too, such as the inverse sine and cosine, or the hyperbolic functions. The routines shown in the listings provide them, with the safe-function tests built in. The tangent function is defined as the ratio of the sine to the cosine. If you blindly implement that ratio, you will get a divide-by-zero error whenever the cosine goes to zero, so the safe version must return a large number instead. It should not be the maximum floating-point number, for a very good reason: later arithmetic on the result could then overflow. This is the same kind of problem I encountered with the square root, and I can solve it the same way, by limiting the argument to a safe range.
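A sketch of the idea for the tangent; the value of BIG_NUMBER here is an assumption, chosen only to be far below the largest representable double.

    #include <math.h>

    static const double BIG_NUMBER = 1.0e20;  /* huge, but nowhere near DBL_MAX */

    double safe_tan(double x)
    {
        double s = sin(x);
        double c = cos(x);
        if (c == 0.0)                         /* at the singularity, return a   */
            return (s < 0.0) ? -BIG_NUMBER    /* large value of the right sign  */
                             :  BIG_NUMBER;
        return s / c;
    }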

Safe versions of the arcsine and arccosine are included in the listing. If you hope to use the arcsine to recover an angle anywhere in the full circle, forget it. The problem is not with the math, but with the geometry: the sine function has a maximum value of 1, so the arcsine alone can only place an angle within two quadrants. The routine is inherently safe, because we test the value of x before we do anything; you only need the safe version in the listing. An equation similar to the one for the arcsine can be written for the arccosine, but a far better approach is also a simpler one: determine the angle in all four quadrants with the four-quadrant arctangent.

Typically, programmers have to use other information (like the sign of the cosine, if asin is called) to decide externally whether the returned value should be adjusted. The four-quadrant arctangent function solves all these problems by taking two arguments, which represent, or are at least proportional to, the cosine and the sine of the desired angle, in that order. From the two arguments, it can figure out in which quadrant the result should be placed with no further help from the programmer.

Why is the four-quadrant arctangent so valuable? Simply because there are so many problems in which the resulting angle can come out in any of the four quadrants. Often, in such cases, the math associated with the problem makes it easier to compute the sine and the cosine separately, rather than the tangent. We could then use an arcsine, say, and place the angle in the right quadrant by examining the sign of the cosine (what a tongue-twister!).

But the four-quadrant arctangent function does it so much more cleanly. Actually, both atan2 and atan do a lot more than that: to avoid inaccuracies in the result when the argument is small, they use different algorithms in different situations. Note carefully the order of the arguments. Note also that it is not necessary to normalize s and c to the range -1 to 1; as long as they are proportional, the function will take care of the rest. The library function atan2 works as advertised if either argument is 0, but not both.

To take care of that one exceptional case, you must test for the condition and return a reasonable answer. Because the arguments provide no information at all about what the angle should be, you can return any value you choose.
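A minimal sketch of that one extra test (the wrapper name is illustrative):

    #include <math.h>

    /* Identical to the library atan2 except for the indeterminate case. */
    double safe_atan2(double s, double c)
    {
        if (s == 0.0 && c == 0.0)
            return 0.0;
        return atan2(s, c);
    }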

Zero seems as good a choice as any. Two versions of the function are given in the listing: the first assumes that a library function atan2 is available; the second is for those unfortunate few who must roll their own. I tried several other arrangements along the way, and none of them were nearly as clean as the straightforward approach taken in the final version.

Square Root
The square root is one of those functions you never pay much attention to until it turns up missing. I mean, even a three dollar calculator has a square root key, right?

So how hard could it be? But when you find yourself having to write one, the problem begins to loom larger. Because the fundamental functions are ... well ... fundamental, they tend to get used quite a few times in a given program.

Get them wrong, or write them inefficiently, and performance is going to suffer. Most engineering or mathematical handbooks will describe the basic algorithms, but sometimes the path from algorithm to finished code can be a hazardous one.

If you implement the algorithm blindly, without understanding its capabilities and limitations, you could end up in big trouble. I once wrote a real-time program in assembly language for a 10MHz Zilog processor. Among many other calculations, it had to compute a sine, cosine, square root, and arctangent, all within microseconds.

Believe me, getting those functions right was important! The whole thrust of this book is the reduction of math theory to practical, useful software suitable for embedded applications. In this chapter, I will begin to look at the fundamental functions, beginning with the square root. Subsequent chapters will address other fundamental functions. Ask how to compute the square root of a number a, and the answer is always the same: Newton's method, in which the formula

    x_new = (x + a/x) / 2

is applied repetitively, beginning with some initial guess, x0, for the root.

At each step, you use this value to calculate an improved estimate of x. At some point, you decide the solution is good enough, and the process terminates. Note that the formula involves the average of two numbers: the current guess x and the quotient a/x. If the guess is too small, the quotient is too large, and vice versa, so their average must be a better estimate than either one. You can take this lesson, one of the best lessons I learned from my training in physics, to the bank: when you want to see how a result differs from the truth, write the true value as a nominal value plus a small error. I can write the true root as the sum of the initial estimate plus an error term.

This means that the error d is hopefully small. If this is true, its square is smaller yet and can be safely ignored. Dropping the d-squared term and solving gives

    x ≈ (x0 + a/x0) / 2 .

The squiggly equals sign shows that this is an approximation to x. It is not exact, because I left out the d-squared term in the derivation. This, in turn, implies that I must keep applying the formula until it converges on a satisfactory solution. This process is called iteration. Two questions remain: what should the initial guess be, and how many iterations does it take? Answering the last question first, the theoretical answer is: forever.

Each iteration gives us, at best, only an approximation to the root. In practice, however, I can get arbitrarily close. I will represent the root to its closest possible value, which is all I can ever hope for, and the process will converge on the true root, for all practical purposes.
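For concreteness, a bare-bones sketch of the iteration; the crude starting guess and the fixed count are placeholders, and better choices for both are exactly what the following sections develop.

    double newton_sqrt(double a)
    {
        if (a <= 0.0)
            return 0.0;                  /* safe-function behavior          */
        double x = (a > 1.0) ? a : 1.0;  /* crude initial guess             */
        for (int i = 0; i < 20; i++)     /* fixed count; far too few for    */
            x = 0.5 * (x + a / x);       /* extreme inputs, as shown below  */
        return x;
    }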

A table of successive guesses and errors for a typical case makes two things clear. First, you can see that convergence is rapid, even with a bad initial guess. In the vicinity of the root, convergence is quadratic, which means that you double the number of correct bits at each iteration (note how the number of zeros in the error doubles in steps 3 through 6). That seems to imply that even for full double precision, you should need a maximum of six iterations to converge.

Not bad. Perhaps most important, at least in this example, is that the process never converges. As you can see, an oscillation starts at step 7 because of round-off error in my calculator.

As you can see from the oscillation, the advice sometimes given, to iterate until two successive values are equal, is bad advice; that moment may never come. Try a convergence test with your system and use it if it works, but what, exactly, does "converged" mean, and what are the implications? Consider also what happens when the initial guess is far too large: because the value of a is completely negligible compared to the square of x, the formula in this case reduces to merely halving the guess, and a method that takes that many iterations just to get near the root is not going to be very welcome in a real-time system. But what if you start with 1?

It is amazing that such a seemingly innocent formula can lead to so many problems.

The Convergence Criterion
One way to decide when to quit is to look at the error: instead of solving directly for x, you can compute the relative error at each step and stop when it is small enough. In fact, the fastest and simplest way of deciding when to stop is often simply to take a fixed number of iterations. A method that iterates a fixed number of times is also more acceptable in a real-time system. The best way to limit the number of iterations is simply by starting with a good initial guess.
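For completeness, a sketch of the error-based test described above, with an iteration cap as a backstop against the oscillation noted earlier; the tolerance and the cap are assumptions.

    #include <math.h>

    double newton_sqrt_tol(double a, double tol)
    {
        if (a <= 0.0)
            return 0.0;
        double x = (a > 1.0) ? a : 1.0;
        for (int i = 0; i < 60; i++) {
            double xn = 0.5 * (x + a / x);
            if (fabs(xn - x) <= tol * xn)    /* relative change small enough */
                return xn;
            x = xn;
        }
        return x;                            /* cap reached; good enough     */
    }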

The Initial Guess
Arriving at an initial guess is tough because the range of floating-point numbers is so wide. A starting guess of 1 works nicely when the input number is itself near 1, but it is a poor guess when the input is very large or very small. The trick is to operate on the exponent and mantissa of the number separately. In fact, you can do this in most languages, although the algorithm will not be portable, being dependent on the internal floating-point format.

Floating-Point Number Formats
Before I go any further, I should explain some features of floating-point number formats.

The purpose of using floating-point numbers is basically to give a wider dynamic range. A 16-bit integer, for example, can hold nonzero positive numbers from 1 through 32,767, a range of roughly 90dB. A 32-bit integer can hold numbers up to about two billion, giving a dynamic range of roughly 186dB. The floating-point number gets its dynamic range by splitting the number into two parts: an exponent and a mantissa. You can think of the exponent as defining the general range of the number (large or small), whereas the mantissa is the value within that range.

The integer part of a logarithm defines the size of a number as a power of 10. The typical floating-point number retains the idea of storing the integral part of the log to define a power of two (not 10), but it stores the fractional part as a true fraction, not as its logarithm. This fractional part is called the mantissa.

Allocating the bits of the word to the various parts involves a trade-off. The more bits set aside for the exponent, the greater the dynamic range, but the fewer bits remain to store the mantissa and, therefore, the lower the precision. You can estimate the number of decimal digits of accuracy by counting the number of bits in the mantissa; because eight is roughly the same size as 10, figure three bits per digit of accuracy. When you store an integer as a 32-bit number, you can get nearly 10 digits of accuracy.

In a 32-bit floating-point number, however, the accuracy is barely seven digits. To maintain the greatest possible accuracy, floating-point numbers are normalized, shifting the mantissa as far left as possible to include as many bits after the decimal point as possible. This assures that we have the greatest number of significant bits. In most cases (but not all), the mantissa is shifted until its leftmost bit is a 1.

Because that leading bit is always a 1, this fact can be used to gain an extra bit of precision: there is no need to store the bit at all. You can simply imagine it to be there and only store the bits one level down. This concept is called the phantom bit. Note that the mantissa can never be greater than or equal to 1. Furthermore, if the high bit is always 1, the value can never be less than 0.5. Thus, normalization forces the mantissa to lie between 0.5 and 1.


You need to know one more fact about the exponent field. To keep the exponent positive, I add 0x40 to all exponents, so they now cover the range 0 to 0x7f. This concept is called the split-on-nn convention, and as far as I know, almost every industrial-strength format uses it. A picture is worth a thousand words, so assume seven bits of exponent, sixteen bits of mantissa, and no phantom bit; numbers in that format might appear as in the accompanying table. Finally, note that the high bit of the mantissa is always set, as advertised.

For this reason, powers of two always have a mantissa of 0x8000 — only the exponent changes. This mantissa is equivalent to 0.5. The next-to-last two rows express the smallest and largest number that can be represented in this format.


To counter this, some designers, including Intel, use an offset that has more headroom on the high side. As you can see, all you do is set the high bit for such numbers.

Back to Your Roots
You may be wondering why I chose this particular point to digress and talk about how floating-point numbers are encoded. The reason is simple: to get a good initial guess for the square root, I am going to take the number apart and work on its exponent and mantissa separately. There are almost as many floating-point formats as there are compilers; each vendor seemed to use its own until IEEE stepped in and defined the most complicated format humanly possible.

Most compiler vendors and CPU designers try to adhere to the IEEE standard, but most also take liberties with it, because the full IEEE standard includes many requirements that affect system performance. Programs based on the Intel 80x86 architecture and its math coprocessor allow for three flavors of floating-point notation, differing in total bits of storage and therefore in precision and range. The float format uses an exponent as a power of four, not two, so the high bit may not always be 1. Rough estimates of the ranges are shown in the accompanying table.

To put it into perspective, there are only about atomic particles in the entire Earth, which would almost fit into even the ordinary float format.


One can only speculate what situation would need the range of a long double, but in the real world, you can be pretty sure that you need never worry about exponent overflow if you use this format. The less said about the float format the better. It uses a time-dishonored technique first made popular in the less-than-perfect IBM 360 series: it stores the exponent as a power of something other than two (the Intel float format uses powers of four). This allows more dynamic range, but at the cost of a sometimes-there, sometimes-not bit of accuracy.

Think about it. If you have to normalize the mantissa, you must adjust the exponent as well by adding or subtracting 1 to keep the number the same. Conversely, because you can only add integral values to the exponent, you must shift the mantissa by the same factor.

This means that the IBM format, which used powers of 16 as the exponent base, required shifts of four bits at a time. Similarly, the Intel float format requires shifts by two bits. The end result: some of the leading mantissa bits may be zero, so you can only count on the accuracy of the worst-case situation.

One other point: the IEEE formats also use the phantom bit. Because hacking the number apart leaves me with a mantissa in the range of 0.5 to 1, the root-finding code can be written for that range alone. The hacking code shown in the listing works for the type double: it splits the number into mantissa and exponent, so that the argument passed to the root finder is always in the range 0.5 to 1. As it stands, the listing is written for clarity rather than speed.

The code shown is intended to be a guide for those of you who are programming in assembly language, or may otherwise, someday, have to write your own square root function. To actually find the root, I need another function, the iteration routine itself; substitute the name of that function at the call to sqrt in the hacking code. Whenever the input is an even power of two, convergence is immediate. Not bad at all for a quick and dirty hack. If you need a floating-point square root in a hurry, here it is.
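A portable approximation of the whole scheme, using the standard frexp() and ldexp() calls instead of format-specific bit twiddling; the details, especially the iteration count and the handling of odd exponents, differ from the book's listings.

    #include <math.h>

    double hack_sqrt(double a)
    {
        if (a <= 0.0)
            return 0.0;                  /* safe-function behavior          */

        int e;
        double m = frexp(a, &e);         /* a = m * 2^e, with 0.5 <= m < 1  */

        if (e & 1) {                     /* make the exponent even, so the  */
            m *= 2.0;                    /* mantissa lands in [1, 2)        */
            e -= 1;
        }

        double x = 1.0;                  /* crude guess for the mantissa    */
        for (int i = 0; i < 5; i++)
            x = 0.5 * (x + m / x);

        return ldexp(x, e / 2);          /* sqrt(a) = sqrt(m) * 2^(e/2)     */
    }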

Take special note that things get a little interesting when I put the exponent and mantissa back together: if the original exponent is odd, it needs special handling, while for the even exponents I simply divide the exponent by two. Some designers prefer to use only even exponents. You can do this, but you pay a price in a wider range for the mantissa, which can then vary by a factor of four, from 0.25 to 1. This leads to one or two extra steps of iteration.

Personally, I prefer the method shown because it gives me fewer iterations, but feel free to experiment for yourself. For the initial guess itself, I could also have simply set it to some constant value. Can I do better by choosing an optimal value based on the mantissa? Yes, by quite a bit.

A plot of the square root of the mantissa over the range of interest tells the story: the solution varies from about 0.707 to 1. Obviously, a good initial guess would be something in between. Using the best single constant for this range gets the iteration off to a decent start, but can I do better still? Again, the answer is yes. The most obvious approach is to use a table of starting values based on the value of the mantissa. Using a full table lookup would take up entirely too much space, but you can use the leading few bits of the mantissa to define a short table lookup.
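The flavor of such a lookup is sketched here, with a four-entry table whose values (square roots of each sub-range's midpoint) are illustrative rather than the book's numbers.

    /* The top bits of the mantissa (0.5 <= m < 1) select a starting guess. */
    double table_guess(double m)
    {
        static const double guess[4] = {
            0.7500,   /* ~sqrt(0.5625), for m in [0.500, 0.625) */
            0.8292,   /* ~sqrt(0.6875), for m in [0.625, 0.750) */
            0.9014,   /* ~sqrt(0.8125), for m in [0.750, 0.875) */
            0.9683    /* ~sqrt(0.9375), for m in [0.875, 1.000) */
        };
        int index = (int)((m - 0.5) * 8.0);   /* 0..3 */
        return guess[index];
    }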

The fragment above shows the general idea of the table lookup approach.

The Best Guess
I said there was a better approach, and the key to it can be seen in the same plot of the root against the mantissa. Using a constant value for the initial guess approximates that curve by a horizontal line; surely I can do better than that. Using a table of values, as suggested above, is equivalent to approximating the function by a staircase.

But even better, why not just draw a straight line that approximates the function? As you can see, the actual curve is fairly straight over the region of interest, so a straight line approximation is really a good fit.

The general equation of a straight line is

    y = A + Bx

so all I have to do is find the best values for the constants A and B. I could simply make the line touch the curve at the two extremes of the range, but that would make the error one-sided. It is better to accept some error at the extremes. Done properly, this makes the straight line drop below the curve in the middle, which tends to even out the error, plus and minus.

I define this error to be the difference between the straight line and the true square root. Forcing the error at the two extremes of the range to be equal gives a relation between A and B, which reduces the problem to finding a single constant.

To find A, note that there is some point P2 where the curve rises above the straight line to a maximum. The location of that point turns out to be my old friend, the geometric mean. Forcing the error at this point to be the same as at the extremes, but in the opposite direction, I finally arrive at the optimal values for the constants. Sure enough, the resulting line hugs the curve closely over the whole range. Using this approach, I get convergence as shown in the accompanying table.
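A sketch with concrete numbers: fitting a line to the square root over the mantissa range 0.5 to 1 by an equal-ripple construction gives roughly A = 0.4205 and B = 0.5858. These constants were computed independently for this sketch and may differ slightly from the book's values.

    /* Straight-line initial guess for sqrt(m), 0.5 <= m < 1; the constants
       come from the equal-ripple fit described in the accompanying text. */
    double line_guess(double m)
    {
        return 0.4205 + 0.5858 * m;   /* within about 1 percent of sqrt(m) */
    }

Feeding this guess into the Newton iteration, in place of the crude constant used in the earlier sketches, is what produces the rapid convergence described next.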

For floating-point accuracy equivalent to a C float (roughly six digits), convergence is good enough in only one iteration. The long string of zeros in the error is nice to see, but not always necessary. For a proper perspective, look at the accuracy expressed as a percent error.

After zero, one, and two iterations, the percent error drops rapidly. For many cases, one or two iterations give all the accuracy you need. For that matter, an error of less than 1 percent, which you get after zero iterations, is enough in some cases (within a control loop, for example, where the feedback keeps errors from growing). In the Zilog real-time program I mentioned earlier in this chapter, where I had to compute a lot of functions in a hurry, I needed only one iteration of the square root by using the solution from the previous cycle as a starting value.

That approach proved to be more than adequate for the problem.

Putting it Together
All that remains is to put the concepts together into a callable function, with proper performance over the exponent range.

This is shown in the final listing. I also have to check for zero as a special case. This routine can serve as a template for a very fast assembly language function. If you need more speed and can tolerate less accuracy, leave out one, two, or even all three of the iterations.

Integer Square Roots
The algorithm given above for the square root of a floating-point number is hard to beat. But there are times when all you need or want is integer arithmetic.

In many embedded systems, there is no floating-point processor, so floating-point arithmetic must be done via software emulation. Is the integer version of the algorithm, then, just a simpler and faster copy of the floating-point version? Surprisingly, the answer is no, not quite. Using integer arithmetic naturally leads to the fast integer operations built into most CPUs, so in most cases doing things with integers will indeed speed things up.

But there are some offsetting factors that make the integer algorithm a little more delicate. There are two problems. The Newton formula translates easily enough.

The integer version of the Newton iteration uses integer division, which truncates. To convince yourself of what that means, try to find the square root of 65,535. Unlike the floating-point case, I expect integer arithmetic to be exact. A reasonable definition of the square root of an integer x is the largest value whose square does not exceed the input value. By this definition, there is only one acceptable solution: 255. But in this case, applying the Newton formula leaves the result bouncing between 255 and 256, and one of these is just plain wrong.

This is a terrible result. In the floating-point case, I was always assured that each successive iteration drove me inexorably closer to the root, at least until the result got within the limits of machine precision. In integer arithmetic, the programmer expects every bit to count. An algorithm that can be assured to work is shown in the code fragment below. This seems like a case of redundant code, and so it is. Another problem with the integer algorithm is that it has to cover an incredible range of numbers.
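A sketch of one way to arrange such an assured-to-work routine: a capped Newton iteration in integer arithmetic, followed by the "redundant" correction loops that enforce the definition of the integer root. The types, bounds, and loop cap are assumptions, not the book's code.

    #include <stdint.h>

    uint16_t int_sqrt(uint32_t a)
    {
        if (a == 0)
            return 0;

        uint32_t x = (a > 65535UL) ? 65535UL : a;   /* first guess, in range     */

        for (int i = 0; i < 25; i++) {              /* cap halts any oscillation */
            uint32_t xn = (x + a / x) / 2;
            if (xn == x)
                break;
            x = xn;
        }
        if (x > 65535UL)
            x = 65535UL;                            /* result must fit 16 bits   */

        /* The "redundant" part: force x*x <= a < (x+1)*(x+1). */
        while (x * x > a)
            x--;
        while (x < 65535UL && (x + 1) * (x + 1) <= a)
            x++;

        return (uint16_t)x;
    }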

They also sell a Kindle version, so I was wondering if this PDF is somehow transformed from the Kindle version by somebody. If I find that I can use this book, then I'll order a paper copy if I can't find it in the nearest university bookshop; the seller will let you take a good look at the content, by the way. After a little more looking around, it looks like what might have happened is that someone with an ACM Digital Library or similar subscription might have downloaded it, which they were free to do, but then posted it for others to download, and I suppose the bit-torrent thing makes it hard to trace where it came from, so they stay out of trouble.

Although the website is a kat. domain, I checked because I wondered if it was based in some third-world country where it would be easy to get away with distributing things illegally. If anyone finds other info, let us know. Meanwhile, I guess I'd better edit my first post above.

I went to that location and it wanted me to sign up for something before I could download anything. I decided to pass; I don't randomly sign up for things. Great way to increase one's spam intake. That was over 10 years ago and I forgot all about it. I also see now it was Jack Crenshaw, not Jack Ganssle; I got them mixed up because I kept seeing both of them in Embedded Systems Programming magazine.

See following posts. I'm surprised we're OK with posting links to torrents like that. Tut, tut. Indeed! This isn't quite like the case of old ROMs from defunct companies, in my view - a moral rather than legal view.

I looked, and the copyright page does have: "No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher; with the exception that the program listings may be entered, stored, and executed in a computer system, but they may not be reproduced for publication." I compared it to the copyright page in the programming manual WDC has online and found it has no such notice.

This is getting stranger by the minute. I don't randomly sign up for things either, and I did not give them any email address or other info, but it seemed like they just wanted me to be able to help distribute it, since that would reduce their upload bandwidth requirements.

