When did IBM start using ASCII?


13

I'm trying to figure out when IBM switched to ASCII, and when ASCII became a worldwide standard.

Also, did IBM make ASCII the standard worldwide?

What I've found so far:

According to Wikipedia, the IBM System/360 used the EBCDIC character set, an eight-bit character encoding developed separately from the seven-bit ASCII encoding scheme.

On March 11, 1968, US President Lyndon B. Johnson required that all computers purchased by the US federal government support ASCII:

All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange

我還發現了以下內容:

operating systems running on the IBM PC and its descendants use ASCII, as did AIX/370 and AIX/390 running on System/370 and System/390 mainframes

Is it safe to say that IBM moved to ASCII starting with the System/370?

If so, is it safe to say that IBM started using ASCII in the 1970s?

And if so, is it safe to say that the System/370 had many clones, and that ASCII therefore became popular worldwide?

19

IBM started using ASCII before 1970; the 2260 terminal, released in 1964, used the unpublished (but ratified) 1965 version of the ASA X3.4 standard.

IBM mainframes still use EBCDIC, so I don’t think their popularity had much bearing on the popularity of ASCII (but other encodings’ popularity influenced IBM mainframes: their instruction set includes conversion instructions). The popularity of ASCII is overestimated from a Western perspective too: most Asian markets used other character encodings, and even European markets used more than ASCII (but European encodings include ASCII as a subset).
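The conversion instructions mentioned above are table-driven: on the System/360, the TR (Translate) instruction replaces each byte of a string with the byte found at that index in a 256-byte table. A minimal Python sketch of the same idea, using Python's built-in cp037 codec as a stand-in for one common EBCDIC variant (an illustrative assumption; real mainframes choose among many EBCDIC code pages):

```python
# Table-driven code conversion, analogous to the S/360 TR instruction:
# each input byte indexes into a 256-byte translation table.

# Build the 256-byte EBCDIC (cp037) -> Latin-1/ASCII table by decoding
# every possible byte value and re-encoding it.
ebcdic_to_ascii = bytes(range(256)).decode("cp037").encode("latin-1")

ebcdic_text = "HELLO".encode("cp037")          # b'\xc8\xc5\xd3\xd3\xd6'
print(ebcdic_text.translate(ebcdic_to_ascii))  # -> b'HELLO'
```

One table lookup per byte is exactly what the hardware instruction does, which is why such conversions were cheap even on 1960s machines.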


3

In the 1960s, IBM used a crazy variety of character codes. IBM was the king of punched cards, commonly known as "IBM cards", so many of its codes related to the sparse 12-bit codes used for those cards. However, even these were not fully standardized: different keypunch models used different character sets! 6-bit BCDIC was designed to map easily to the most common card-code characters. Many IBM peripherals used codes closely related to BCDIC, but they generally required some translation specific to the peripheral. And printer chains could be changed. Then there were Selectric tilt/rotate codes, which only told the mechanism how to move: what character you got depended on which typeball was on the spindle.

EBCDIC, extended BCDIC, was IBM's internal data exchange code for the 1130 and System/360, but IBM machines could also use ASCII as an external code.

I think it was the Teletype Model 33 that drove the popularity of ASCII. IBM's business was card-based batch processing mainframes, but time-sharing and single-user interactive systems were becoming available. In the beginning, the Model 33 was the terminal of choice because it was inexpensive and good enough for the job. If your focus is interactivity, there are no punched cards, and your keyboard, printer, and paper tape all use ASCII, it makes sense to make ASCII your common character code.


18

TL;DR:

  • ASCII was never intended for processing, only as an interface standard for data exchange (hence the name: American Standard Code for Information Interchange).
  • IBM never switched; it still uses EBCDIC within mainframes and ASCII for communication.
  • IBM was a major proponent of ASCII, but not the sole force, and especially not internationally.
  • ASCII's international use soared in the 1970s once it was recommended by ISO and ECMA - the latter especially being the driving force, due to the huge variability of encodings within Europe.
  • (Later) Minicomputers and especially microcomputers simply started out using ASCII for processing as well, as there was no reason to invent a different code (*1).

In Detail:

I'm trying to figure out when IBM switched to ASCII and when ASCII became a worldwide standard.

Well, IBM never switched.

EBCDIC is used for historic reasons within a /360 mainframe, but all outside connections (except proprietary ones) are ASCII. In fact, since IBM was a driving force behind ASCII, the /360 was prepared to switch to ASCII when it came to BCD handling. Except that this never became useful, and the feature was dropped in the late 1980s.

Moreover, did IBM make ASCII a worldwide standard?

IBM was a major force behind ASCII from the start, pushing for a standard. This included using preliminary versions of the standard in terminal systems. ASCII was also intended to be used for the /360 and any later machine, but standardisation took longer than expected, so IBM had to go forward with its own 8-bit code, EBCDIC, based on prior 6-bit and punched-card codes.

While a major player, IBM alone would not have been able to force it through. Similarly, a US purchasing order could not do it - after all, it requested only compatibility with ASCII for information exchange, not operation in ASCII. That is a loophole big enough for anything in existence, or yet to be invented, to slip through: all that was needed was some interface to accept ASCII data and reply using it.

ASCII only became an international standard when ECMA, the European Computer Manufacturers Association, recommended an ASCII-based international variant in 1965, which became an international recommendation in 1967 as ISO 646 and was finally accepted in 1972. Here the ISO 646 IRV (International Reference Version) defines a compatible base for all participating (Latin) scripts. ASCII is, at that point, simply the US variant, called ISO 646-US, and corresponds to the 1968 version.

From the early 1970s on, ISO 646 became the major encoding standard used for anything except mainframes.

Is it safe to say that IBM moved to ASCII starting from the System/370?

No. Besides the fact that the /370 was more of a renaming exercise than a really new series, the /360 was able to handle ASCII from the start. In fact, ASCII as a hardware-supported feature was dropped from the line around the time it was renamed /390.

If so, is it safe to say that IBM started using ASCII in the 1970s?

No. IBM was already using ASCII before it became a standard, and continues to do so today.

And if so, is it safe to say that the System/370 had many clones, and that ASCII therefore became popular worldwide?

No, as /370s still use EBCDIC as their default character set. Unix on the /370 and its successors is an exception. But ASCII can be, and is, used for all communication with external systems (which is what satisfied the law mentioned above).

In the IBM mainframe world (*2) two basic codesets were used:

  • EBCDIC for everything within the system, that is CPU, memory, disks, tapes and other storage.
  • ASCII for all communication to terminals and remote systems.

And this continues to this day.

Modern (post-1970) EBCDIC became a full superset of ASCII. EBCDIC's structure reflects ASCII's, and is also the reason why ISO 8859 contains two areas of control characters :)
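Those two control-character areas can be seen directly in the character tables. Latin-1 (ISO 8859-1) shares its first 256 code points with Unicode, so a small illustrative Python check using the standard `unicodedata` module shows both blocks:

```python
import unicodedata

# ISO 8859 reserves two control regions: C0 at 0x00-0x1F (plus DEL at
# 0x7F inherited from ASCII) and C1 at 0x80-0x9F. Since Latin-1 maps
# one-to-one onto the first 256 Unicode code points, the Unicode
# category "Cc" (control) exposes exactly these ranges.
controls = [cp for cp in range(0x100)
            if unicodedata.category(chr(cp)) == "Cc"]

assert controls == list(range(0x20)) + list(range(0x7F, 0xA0))
print(f"{len(controls)} control codes in two blocks")  # -> 65 control codes in two blocks
```

This is a sketch of the code-table layout, not mainframe code; the point is simply that 8-bit ISO standards carry a second control block where EBCDIC-influenced structure left room for one.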


*1 - Then again, some did, like Commodore (PET) or Apple (Apple II), figuring that private codes might be helpful - except these got confined to special niches and never spread beyond them.

*2 - That is IBM and all hardware compatible systems like Hitachi, Fujitsu, Bull, Univac, RCA, Siemens, ...


2

I'm going to provide a terrible answer, but include a couple of references that might be great for nostalgia. One is NostalgiaNerd on YouTube, who provides a British viewpoint on IBM's shift to ASCII (OK, they only did it through code pages, not really fully/completely ASCII).

The video is strangely titled - nothing about ASCII or EBCDIC in the name: "These keys shouldn't exist" (it's about the pipe sign; on some keyboards it has a gap halfway down) https://youtu.be/BktIY7VbrUs [Do note, this video might have a lot of wrong info (per the comments on Stack Exchange); I wasn't sure whether to strike the link or remove it completely.]

Even if the video is error-filled, NostalgiaNerd at least gives us some references to handwritten notepads from the early development days of ASCII here: https://longstreet.typepad.com/thesciencebookstore/2012/03/heres-the-link.html


1

Answering one of your questions "When did ASCII become a worldwide standard", the answer is: never. The "A" stands for "American". At the time the US adopted ASCII, other countries were adopting their own variants, substituting different characters according to national needs: for example in the UK, "£" was substituted for "#". These variations were endorsed and harmonized by the international standard ISO 646. For many years if you bought a printer, for example, you would have to make some fiddly settings on installation to configure it to your preferred national variant of ISO 646. (Of course, many people, especially Americans, confused the terminology, and thought of all these standards as "ASCII with variations").
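To make the variant problem concrete: the same 7-bit byte decodes to different characters under different ISO 646 national variants. A hand-rolled Python sketch (Python ships no ISO 646 variant codecs; only the "£"-for-"#" substitution of the UK variant, BS 4730, is modelled here - the full variant differs in a few more positions):

```python
# Decode 7-bit ISO 646 bytes as the UK national variant (BS 4730).
# Positions not listed fall back to their US-ASCII meaning.
UK_SUBSTITUTIONS = {0x23: "£"}   # code point 0x23: '#' in US-ASCII, '£' in the UK

def decode_iso646_uk(data: bytes) -> str:
    return "".join(UK_SUBSTITUTIONS.get(b, chr(b)) for b in data)

# The byte stream is identical; only the configured variant changes
# what the printer or terminal renders.
print(decode_iso646_uk(b"Price: #5"))  # -> Price: £5
```

This is exactly the "fiddly settings on installation" problem: the device, not the data, decided which national glyph appeared at the variant positions.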

All of these were 7-bit standards, and in the 1980s they were largely superseded by 8-bit standards such as ISO 8859-1 also known as Latin-1. These too had regional variants, though with 8 bits a single variant was good enough for the whole of Western Europe. These standards generally had ASCII as a subset (or at least, the printable ASCII characters: control characters are another question). But the term "ASCII" persisted in popular usage, and you will see plenty of StackOverflow questions using the term "ASCII" to refer to characters with codes above 127 - indeed some people pretty well use "ASCII character" as a synonym for "character". But if you're talking standards, then ASCII per se was never a standard anywhere except the US.