
TFTP protocol implementation and the difference between netascii and octet

I'm building a server application that implements the TFTP protocol. I'm having a hard time understanding the difference between ASCII format and binary format (netascii and octet) in TFTP, and how I should read files differently as the protocol states.

I know that an ASCII char can be represented with a single byte. So I don't understand the difference between reading in ASCII mode (1 byte per character) and binary mode (1 raw byte).

I can read the file with the flag ios::binary for binary mode (octet in TFTP) and without it for ASCII (netascii in TFTP), but I really don't understand what the difference is between reading files in these two ways (I always end up with an array of bytes).
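Here is roughly what I mean, as a minimal sketch (the file name is just an example): with or without ios::binary I get the same array of bytes on my system.

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Read the whole file into a byte vector, with or without ios::binary.
static std::vector<char> slurp(const char* path, std::ios::openmode extra = {}) {
    std::ifstream in(path, std::ios::in | extra);
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}

int main() {
    // On Unix, text mode and binary mode yield identical bytes; only on
    // Windows does text mode translate CR/LF <-> LF on the way in.
    auto text   = slurp("example.txt");
    auto binary = slurp("example.txt", std::ios::binary);
    std::cout << (text == binary ? "same bytes" : "different bytes") << '\n';
}
```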

If someone can help me understand, I'd really appreciate it.

The TFTP protocol specification: http://www.rfc-editor.org/rfc/rfc1350.txt

The part that I don't understand is this one:

Three modes of transfer are currently supported: netascii (This is ascii as defined in "USA Standard Code for Information Interchange" [1] with the modifications specified in "Telnet Protocol Specification" [3]. Note that it is 8 bit ascii. The term "netascii" will be used throughout this document to mean this particular version of ascii.); octet (This replaces the "binary" mode of previous versions of this document.) raw 8 bit bytes; mail, netascii characters sent to a user rather than a file. (The mail mode is obsolete and should not be implemented or used.) Additional modes can be defined by pairs of cooperating hosts.

asked Aug 18 '11 by Francesco Belladonna

People also ask

What is Netascii?

Netascii is a modified form of ASCII, defined in RFC 764. It is an 8-bit extension of the 7-bit ASCII character space, consisting of the characters 0x20 to 0x7F (the printable characters and the space) plus eight of the control characters.
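As a sketch, that set can be checked with a simple predicate (assuming the eight control characters are the RFC 764 NVT codes NUL, BEL, BS, HT, LF, VT, FF and CR, i.e. 0x00 and 0x07 to 0x0D):

```cpp
#include <cstdint>

// True if a byte is a legal netascii character: the range 0x20..0x7F
// plus the eight NVT control codes NUL, BEL, BS, HT, LF, VT, FF and CR.
bool is_netascii(std::uint8_t c) {
    return (c >= 0x20 && c <= 0x7F) || c == 0x00 || (c >= 0x07 && c <= 0x0D);
}
```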

Which protocol is used by TFTP at the transport layer and why?

TFTP supports ASCII (American Standard Code for Information Interchange) and binary transfer modes, and uses UDP (User Datagram Protocol) at the transport layer. UDP is much simpler than TCP (Transmission Control Protocol), which keeps the protocol small and transfers fast.

What protocol does TFTP use?

TFTP uses the User Datagram Protocol (UDP) to transport data from one end to another. TFTP is mostly used to read and write files/mail to or from a remote server.
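As a sketch of the wire format from RFC 1350 (the helper name here is illustrative), a read request is just a 2-byte opcode followed by two NUL-terminated strings, sent in a single UDP datagram to port 69:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Build a TFTP read request (RRQ, opcode 1) per RFC 1350:
//   2 bytes opcode | filename | 0 | mode | 0
// mode is "netascii" or "octet".
std::vector<std::uint8_t> make_rrq(const std::string& filename,
                                   const std::string& mode) {
    std::vector<std::uint8_t> pkt;
    pkt.push_back(0);   // opcode high byte
    pkt.push_back(1);   // opcode low byte: 1 = RRQ
    pkt.insert(pkt.end(), filename.begin(), filename.end());
    pkt.push_back(0);   // terminator for filename
    pkt.insert(pkt.end(), mode.begin(), mode.end());
    pkt.push_back(0);   // terminator for mode
    return pkt;
}
```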


1 Answer

There are two passages which can help clarify what the purpose of netascii is in RFC-1350/TFTP:

netascii (This is ascii as defined in "USA Standard Code for Information Interchange" [1] with the modifications specified in "Telnet Protocol Specification" [3].)

The "Telnet Protocol Specification" is RFC-764, and it describes the interpretation of various ASCII codes for use on the "Network Virtual Terminal". So, netascii would follow those interpretations (which include that lines must be terminated with a CR/LF sequence).

and:

A host which receives netascii mode data must translate the data to its own format.

So a host that used EBCDIC as its native encoding, for example, might be expected to translate netascii to that encoding, but would leave "octet" data alone.

If you're implementing the TFTP server on a Unix (or other) system that uses LF for line endings, you'd be expected to add the CR for netascii transfers (as well as convert actual CR characters in the file to CR/NUL sequences).
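A minimal sketch of that outgoing translation for a Unix host (the helper name is mine, not from the RFC): each LF becomes CR LF, and each literal CR becomes CR NUL, so the receiver can reverse the mapping unambiguously.

```cpp
#include <string>

// Translate native Unix text to netascii for a TFTP send:
//   LF -> CR LF   (netascii line ending)
//   CR -> CR NUL  (escapes a literal carriage return)
// All other bytes pass through unchanged.
std::string to_netascii(const std::string& native) {
    std::string out;
    out.reserve(native.size());
    for (char c : native) {
        if (c == '\n')      { out += '\r'; out += '\n'; }
        else if (c == '\r') { out += '\r'; out += '\0'; }
        else                { out += c; }
    }
    return out;
}

// The receiving side reverses the mapping: CR LF -> LF, CR NUL -> CR.
```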

answered Nov 15 '22 by Michael Burr