Tuesday, June 12, 2018

A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applications that process data as it comes in, typically without buffering delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter increments of time. Real-time systems are time-bound systems with well-defined, fixed time constraints: processing must be done within the defined constraints or the system will fail. They are either event-driven or time-sharing. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch tasks based on clock interrupts.

A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter. A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category: an RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet deadlines deterministically it is a hard real-time OS.

An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread-switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.

See the comparison of real-time operating systems for a comprehensive list. Also, see the list of operating systems for all types of operating systems.


Design philosophy

The most common designs are:

  • Event-driven - switches tasks only when an event of higher priority needs servicing; called preemptive priority, or priority scheduling.
  • Time-sharing - switches tasks on regular clock interrupts, and on events; called round robin.

Time-sharing designs switch tasks more often than strictly needed, but give smoother multitasking, creating the illusion that a process or user has sole use of a machine.
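The difference between the two policies can be sketched as a preemption decision. The `task_t` type and the convention that a higher number means higher priority are illustrative, not taken from any particular RTOS:

```c
#include <stdbool.h>

typedef struct { int priority; } task_t;   /* higher number = higher priority */

/* Event-driven (preemptive priority): switch only when the newly
   ready task outranks the running one. */
bool should_preempt_event_driven(const task_t *running, const task_t *ready) {
    return ready->priority > running->priority;
}

/* Round robin: additionally switch on every clock tick between
   equal-priority tasks, so each gets a regular time slice. */
bool should_preempt_round_robin(const task_t *running, const task_t *ready,
                                bool clock_tick) {
    return ready->priority > running->priority ||
           (clock_tick && ready->priority == running->priority);
}
```

Note that the round-robin variant reduces to the event-driven rule when no clock tick is pending.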

Early CPU designs needed many cycles to switch tasks, during which the CPU could do nothing else useful. For example, with a 20 MHz 68000 processor (typical of the late 1980s), task-switch times are roughly 20 microseconds. (In contrast, a 100 MHz ARM CPU (from 2008) switches in less than 3 microseconds.) Because of this, early operating systems tried to minimize wasted CPU time by avoiding unnecessary task switching.


Scheduling

In a typical design, the task has three states:

  1. Run (execute on CPU);
  2. Ready (ready for execution);
  3. Blocked (waiting for events, I/O for example).

Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU. The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready-to-execute state (resource starvation).
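The three states and their legal transitions can be expressed as a small state machine; the enum and function names here are our own, chosen for illustration:

```c
#include <stdbool.h>

/* Three-state task model: Running, Ready, Blocked. */
typedef enum { TASK_RUNNING, TASK_READY, TASK_BLOCKED } task_state_t;

/* Valid transitions: Running->Ready (preempted), Running->Blocked
   (waits on I/O), Ready->Running (dispatched), Blocked->Ready (the
   awaited event arrives). A blocked task never goes straight to
   Running: it must pass through the ready queue first. */
bool transition_valid(task_state_t from, task_state_t to) {
    switch (from) {
    case TASK_RUNNING: return to == TASK_READY || to == TASK_BLOCKED;
    case TASK_READY:   return to == TASK_RUNNING;
    case TASK_BLOCKED: return to == TASK_READY;
    }
    return false;
}
```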

Usually, the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited and, in some cases, all interrupts are disabled. But the choice of data structure also depends on the maximum number of tasks that can be on the ready list.

If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority. That way, finding the highest-priority task to run does not require iterating through the entire list. Inserting a task then requires walking the ready list until reaching either the end of the list or a task of lower priority than that of the task being inserted.
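The insertion walk just described might look like this in C; `tcb_t` is a made-up minimal task control block, and higher numbers mean higher priority:

```c
#include <stddef.h>

typedef struct tcb {
    int priority;           /* higher number = higher priority */
    struct tcb *next;
} tcb_t;

/* Insert so the list stays sorted highest-priority-first; the
   scheduler can then always dispatch the head without scanning. */
tcb_t *ready_list_insert(tcb_t *head, tcb_t *task) {
    tcb_t **link = &head;
    while (*link != NULL && (*link)->priority >= task->priority)
        link = &(*link)->next;      /* stop at the first lower-priority task */
    task->next = *link;
    *link = task;
    return head;
}
```

The walk is bounded by the number of tasks of equal or higher priority, which is exactly the cost the sorted layout trades for an O(1) dispatch.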

Care must be taken not to inhibit preemption during this search. Longer critical sections should be divided into small pieces. If an interrupt that makes a high-priority task ready occurs during the insertion of a low-priority task, that high-priority task can be inserted and run immediately, before the low-priority task is inserted.

The critical response time, sometimes called the flyback time, is the time it takes to queue a newly ready task and restore the state of the highest-priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.

In more sophisticated systems, real-time tasks share computing resources with many non-real-time tasks, and ready lists can be arbitrarily long. In such systems, ready scheduler lists that are applied as linked lists will not be sufficient.

Algorithms

Some commonly used RTOS scheduling algorithms are:

  • Co-operative scheduling
  • Preemptive scheduling
    • Rate-monotonic scheduling
    • Round-robin scheduling
    • Fixed-priority pre-emptive scheduling, an implementation of preemptive time slicing
    • Fixed-priority scheduling with deferred preemption
    • Fixed-priority non-preemptive scheduling
    • Critical section preemptive scheduling
    • Static time scheduling
  • Earliest Deadline First approach
  • Stochastic digraphs with multi-threaded graph traversal
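As one concrete illustration, the Earliest Deadline First approach always dispatches the ready task whose absolute deadline is soonest. The structure below is a simplified sketch of that selection step, with illustrative field names:

```c
/* A ready task with an absolute deadline (e.g. in tick units). */
typedef struct { int id; long deadline; } edf_task_t;

/* Return the index of the ready task with the earliest deadline. */
int edf_pick(const edf_task_t *tasks, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (tasks[i].deadline < tasks[best].deadline)
            best = i;               /* sooner deadline wins */
    return best;
}
```

A production EDF scheduler would keep the ready set in a priority queue rather than scanning, but the selection rule is the same.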


Communications and intra-resource sharing

Multitasking operating systems like Unix are poor at real-time tasks. The scheduler gives the highest priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical job will have access to enough resources. Multitasking systems must manage the sharing of data and hardware resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or hardware resource simultaneously. There are three common approaches to resolving this problem:

Temporarily masking/disabling interrupts

General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because a user program could then control the CPU for as long as it wished. Some modern CPUs do not allow user-mode code to disable interrupts, since such control is considered a key operating-system resource. Many embedded systems and RTOSes, however, allow the application itself to run in kernel mode for greater system-call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention.

On single-processor systems, an application running in kernel mode and masking interrupts is the lowest-overhead method to prevent simultaneous access to a shared resource. While interrupts are masked and the current task does not make a blocking OS call, the current task has exclusive use of the CPU, since no other task or interrupt can take control, so the critical section is protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute. Temporarily masking interrupts should only be done when the longest path through the critical section is shorter than the desired maximum interrupt latency. Typically this method of protection is used only when the critical section is just a few instructions long and contains no loops. This method is ideal for protecting hardware bit-mapped registers when the bits are controlled by different tasks.
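The masking pattern can be sketched as follows. Here `irq_disable()`/`irq_restore()` stand in for real hardware intrinsics (for example, clearing and restoring the CPU interrupt-enable flag); in this sketch they merely toggle a variable so the save/restore discipline can be shown:

```c
#include <stdbool.h>

bool irq_enabled = true;            /* stand-in for the CPU interrupt flag */

bool irq_disable(void) {            /* returns the previous state */
    bool was = irq_enabled;
    irq_enabled = false;
    return was;
}
void irq_restore(bool was) { irq_enabled = was; }

volatile unsigned shared_reg;       /* e.g. a bit-mapped hardware register */

void set_bits_atomically(unsigned mask) {
    bool saved = irq_disable();     /* enter the critical section */
    shared_reg |= mask;             /* a few instructions, no loops */
    irq_restore(saved);             /* unmask; pending interrupts now run */
}
```

Saving and restoring the previous state (instead of unconditionally re-enabling) lets such critical sections nest safely.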

Binary semaphores

When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it is better to use mechanisms also available on general-purpose operating systems, such as a semaphore and OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors.

A binary semaphore is either locked or unlocked. When it is locked, a task must wait for the semaphore to unlock; a binary semaphore is therefore equivalent to a mutex. Typically a task will set a timeout on its wait for a semaphore. There are several well-known problems with semaphore-based designs, such as priority inversion and deadlocks.
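A minimal sketch of the take/give interface with a timeout follows. In a real RTOS, a failed take would block the calling task in the scheduler rather than spin, and the give would wake a waiter; both of those mechanisms are elided here:

```c
#include <stdbool.h>

typedef struct { bool locked; } bin_sem_t;

/* Try to take the semaphore, giving up after timeout_ticks attempts.
   A real kernel would block the task here instead of polling. */
bool sem_take(bin_sem_t *s, int timeout_ticks) {
    while (s->locked) {
        if (timeout_ticks-- <= 0)
            return false;           /* timed out waiting for the unlock */
    }
    s->locked = true;
    return true;
}

void sem_give(bin_sem_t *s) { s->locked = false; }
```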

In priority inversion, a high-priority task waits because a low-priority task holds a semaphore, but the lower-priority task is not given CPU time to finish its work. A typical solution is to have the task that holds the semaphore run at, or 'inherit', the priority of the highest waiting task. But this simple approach fails when there are multiple levels of waiting: task A waits for a binary semaphore locked by task B, which in turn waits for a binary semaphore locked by task C. Handling multiple levels of inheritance without introducing instability in cycles is complex and problematic.
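The single-level inheritance rule amounts to: the semaphore holder runs at the maximum of its own priority and the highest priority among the tasks waiting on it. The structures below are illustrative, not from any particular kernel:

```c
#define MAX_WAITERS 8

typedef struct {
    int base_priority;                     /* holder's own priority */
    int waiter_priorities[MAX_WAITERS];    /* tasks blocked on its semaphore */
    int n_waiters;
} holder_t;

/* Effective priority under basic priority inheritance. */
int effective_priority(const holder_t *h) {
    int p = h->base_priority;
    for (int i = 0; i < h->n_waiters; i++)
        if (h->waiter_priorities[i] > p)
            p = h->waiter_priorities[i];   /* inherit the highest waiter */
    return p;
}
```

The multi-level difficulty noted above arises because this computation must be re-applied transitively along the whole chain of holders every time a waiter appears or leaves.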

In a deadlock, two or more tasks lock semaphores without timeouts and then wait forever for the other task's semaphore, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two semaphores, but in the opposite order. Deadlock is prevented by careful design, or by semaphores with priority ceilings, which pass control of a semaphore to the higher-priority task on defined conditions.
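One common "careful design" rule that defeats the opposite-order scenario is to acquire semaphores only in a fixed global order (for example, by ascending id), so a cyclic wait can never form. A checker for that rule might look like this; the helper name and id scheme are our own:

```c
#include <stdbool.h>

/* Return true if acquiring the semaphore next_id would respect a
   global ascending-id lock ordering, given the ids already held. */
bool acquisition_order_ok(const int *held_ids, int n_held, int next_id) {
    for (int i = 0; i < n_held; i++)
        if (held_ids[i] >= next_id)
            return false;           /* would violate the global ordering */
    return true;
}
```

Such a check is often compiled into debug builds only, as a cheap way to catch ordering violations before they become field deadlocks.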

Message passing

The other approach to resource sharing is for tasks to send messages in an organized message-passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than that of semaphore systems, simple message-based systems avoid most protocol deadlock hazards and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high-priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
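A minimal mailbox for the manager-task pattern can be a fixed-size ring buffer: client tasks post requests, and only the owning task touches the resource when it fetches them. The queue length and field names below are illustrative:

```c
#include <stdbool.h>

#define QLEN 4

typedef struct {
    int msgs[QLEN];
    int head, tail, count;
} mailbox_t;

/* Called by client tasks to submit a request. */
bool mbox_post(mailbox_t *m, int msg) {
    if (m->count == QLEN) return false;    /* queue full */
    m->msgs[m->tail] = msg;
    m->tail = (m->tail + 1) % QLEN;
    m->count++;
    return true;
}

/* Called only by the managing task. */
bool mbox_fetch(mailbox_t *m, int *msg) {
    if (m->count == 0) return false;       /* nothing pending */
    *msg = m->msgs[m->head];
    m->head = (m->head + 1) % QLEN;
    m->count--;
    return true;
}
```

A real RTOS mailbox would additionally block the poster on full and the fetcher on empty; this sketch keeps only the FIFO discipline.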



Interrupt handlers and the scheduler

Since an interrupt handler blocks the highest-priority task from running, and since real-time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it will not occur again when the interrupt handler returns) and to notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag, or sending a message. A scheduler often provides the ability to unblock a task from interrupt-handler context.
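The short-ISR pattern can be sketched as follows: the handler only records that work is pending, and a driver task does the heavy lifting later. In a real RTOS the flag would typically be a semaphore the ISR releases; the UART names here are illustrative:

```c
#include <stdbool.h>

volatile bool rx_pending = false;   /* signal from ISR to driver task */
int bytes_handled = 0;

/* Interrupt handler: keep this as short as possible. */
void uart_isr(void) {
    /* acknowledge/disable the device interrupt here (hardware-specific) */
    rx_pending = true;              /* just notify the driver task */
}

/* Driver task body (here polled once per call for illustration). */
void uart_driver_task_poll(void) {
    if (rx_pending) {
        rx_pending = false;
        bytes_handled++;            /* the actual, longer processing */
    }
}
```

The flag is `volatile` because it is written in interrupt context and read in task context.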

An OS maintains catalogues of objects it manages, such as threads, mutexes, memory, and so on. Updates to these catalogues must be strictly controlled. For this reason, it can be problematic when an interrupt handler calls an OS function while the application is also in the act of doing so. The OS function called from an interrupt handler could find the object database in an inconsistent state because of the application's update. There are two major approaches to dealing with this problem: the unified architecture and the segmented architecture. RTOSes implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is increased interrupt latency, with interrupts potentially being lost. The segmented architecture does not make direct OS calls but delegates the OS-related work to a separate handler. This handler runs at a higher priority than any thread, but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes which implement the segmented architecture are more predictable and can deal with higher interrupt rates than those with a unified architecture.

Similarly, System Management Mode on x86-compatible hardware can take considerable time before it returns control to the operating system. For this reason it is generally problematic to write real-time software for x86 hardware.



Memory allocation

Memory allocation is more important in real-time operating systems than in other operating systems.

First, for stability there cannot be memory leaks (memory that is allocated but never freed after use). The device should work indefinitely, without ever needing a reboot. For this reason, dynamic memory allocation is frowned upon. Whenever possible, all required memory allocation is specified statically at compile time.

Another reason to avoid dynamic memory allocation is memory fragmentation. With frequent allocation and release of small chunks of memory, a situation may occur where available memory is divided into several sections and the RTOS is incapable of allocating a large enough continuous block of memory, even though there is enough free memory in total. Secondly, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block, which is unacceptable in an RTOS since memory allocation has to occur within a certain amount of time.

Because mechanical disks have much longer and more unpredictable response times, swapping to disk files is not used, for the same reasons as the RAM allocation issues discussed above.

A simple fixed-size-block allocation algorithm works quite well for simple embedded systems because of its low overhead.
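Such a fixed-size-block pool can be built from a free list threaded through the blocks themselves: allocation and free are O(1) pointer operations with no scanning and no fragmentation. The block count and size below are arbitrary illustration values:

```c
#include <stddef.h>

#define BLOCK_SIZE  32
#define NUM_BLOCKS  16

typedef union block {
    union block *next;              /* valid only while the block is free */
    unsigned char data[BLOCK_SIZE]; /* payload when allocated */
} block_t;

static block_t pool_storage[NUM_BLOCKS];   /* statically reserved, no heap */
static block_t *free_list = NULL;

void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) { /* chain every block */
        pool_storage[i].next = free_list;
        free_list = &pool_storage[i];
    }
}

void *pool_alloc(void) {            /* O(1): pop the free-list head */
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;
}

void pool_free(void *p) {           /* O(1): push back onto the free list */
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}
```

Because the storage is reserved statically at compile time, this also satisfies the no-dynamic-allocation guideline above; only blocks of one fixed size can be served.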





References

Source of the article: Wikipedia
