Which encoding uses eight bits per character with one bit used for error checking?


Multiple Choice

Which encoding uses eight bits per character with one bit used for error checking?

ASCII (correct answer)
Unicode
UTF-8
EBCDIC

Explanation:

Think about how an encoding can include a simple check for errors. ASCII is a 7-bit code covering 128 basic characters, and on many systems an eighth bit was added to fill out a full byte per character. That extra bit can serve as a parity bit to catch single-bit transmission errors, giving an eight-bit representation with one bit dedicated to error checking.
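
As a minimal sketch of how that parity bit works (the helper names and the choice of even parity are illustrative assumptions, not part of the ASCII standard itself):

```python
# Minimal sketch of an eighth (parity) bit on top of 7-bit ASCII.
# Assumptions for illustration: even parity, parity stored in the top bit,
# and the helper names below are made up for this example.

def add_parity(ch: str) -> int:
    """Pack a 7-bit ASCII character plus an even-parity bit into one byte."""
    code = ord(ch)
    if code > 0x7F:
        raise ValueError("not a 7-bit ASCII character")
    parity = bin(code).count("1") % 2   # 1 if the 7 data bits have an odd number of 1s
    return (parity << 7) | code         # parity bit occupies the eighth (top) bit

def parity_ok(byte: int) -> bool:
    """A received byte is consistent if its 1-bits still count out even."""
    return bin(byte & 0xFF).count("1") % 2 == 0

sent = add_parity("A")            # 'A' = 0b1000001 -> byte 0b01000001
print(parity_ok(sent))            # True: nothing flipped in transit
print(parity_ok(sent ^ 0b0100))   # False: a single flipped bit is detected
```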

The other options don't fit as precisely. Unicode is a character set, and UTF-8 is a variable-length encoding of it designed to cover a vast range of characters; neither is a fixed eight-bit-per-character scheme with a dedicated parity bit. EBCDIC is an eight-bit encoding used on IBM systems, but its standard form does not reserve a bit for error checking.
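
To see why UTF-8 has no fixed eight-bit slot to spare for a parity bit, here is a short illustration of its variable-length output (the sample characters are arbitrary examples):

```python
# Illustrative only: UTF-8 output is one to four bytes per character,
# so there is no fixed "eighth bit" to dedicate to parity.
for ch in ["A", "é", "€", "😀"]:
    data = ch.encode("utf-8")
    print(f"{ch!r}: {len(data)} byte(s) -> {data.hex()}")
```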
