1. FLV File Structure
An FLV file is divided into two parts: the file header and the file body. The file body consists of ScriptTags, VideoTags, and AudioTags.
References
Official Adobe FLV specification
https://www.adobe.com/content/dam/Adobe/en/devnet/flv/pdfs/video_file_format_spec_v10.pdf
http://download.macromedia.com/f4v/video_file_format_spec_v10_1.pdf
FLV header:
Field | Type | Comment |
---|---|---|
Signature | UI8 | 'F' (0x46) |
Signature | UI8 | 'L' (0x4C) |
Signature | UI8 | 'V' (0x56) |
Version | UI8 | FLV version; 0x01 means FLV version 1 |
TypeFlagsReserved | UB5 | The first five bits must be 0 |
TypeFlagsAudio | UB1 | Whether audio tags are present |
TypeFlagsReserved | UB1 | Must be 0 |
TypeFlagsVideo | UB1 | Whether video tags are present |
DataOffset | UI32 | 9 for FLV version 1: the size of the FLV header, including these four bytes, reserved for future extensions of the format. The data starts at this offset from the beginning of the file. |
Signature: the first 3 bytes of an FLV file are the fixed characters 'F', 'L', 'V', identifying the file as FLV. During format probing, if the first 3 bytes read "FLV", the file is treated as an FLV file.
Flags: in the 5th byte, bit 0 and bit 2 indicate whether video and audio are present, respectively (1 = present, 0 = absent).
DataOffset: the last 4 bytes hold the length of the FLV header.
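Putting these fields together, a minimal sketch of parsing the 9-byte FLV header might look like the following (the `FlvHeaderParser` class and its names are illustrative, not part of any library):

```java
public class FlvHeaderParser {
    // Parsed view of the 9-byte FLV header described in the table above.
    public static final class FlvHeader {
        public final int version;
        public final boolean hasAudio;
        public final boolean hasVideo;
        public final long dataOffset;
        FlvHeader(int version, boolean hasAudio, boolean hasVideo, long dataOffset) {
            this.version = version;
            this.hasAudio = hasAudio;
            this.hasVideo = hasVideo;
            this.dataOffset = dataOffset;
        }
    }

    public static FlvHeader parse(byte[] header) {
        if (header.length < 9 || header[0] != 'F' || header[1] != 'L' || header[2] != 'V') {
            throw new IllegalArgumentException("Not an FLV stream");
        }
        int version = header[3] & 0xFF;
        int flags = header[4] & 0xFF;
        boolean hasAudio = (flags & 0x04) != 0; // bit 2: TypeFlagsAudio
        boolean hasVideo = (flags & 0x01) != 0; // bit 0: TypeFlagsVideo
        // Big-endian UI32 DataOffset, normally 9 for version 1.
        long dataOffset = ((header[5] & 0xFFL) << 24) | ((header[6] & 0xFFL) << 16)
                | ((header[7] & 0xFFL) << 8) | (header[8] & 0xFFL);
        return new FlvHeader(version, hasAudio, hasVideo, dataOffset);
    }
}
```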
FLV file body structure:
Field | Type | Comment |
---|---|---|
PreviousTagSize0 | UI32 | Always 0; there is no tag before the first one |
Tag1 | FLVTAG | The first tag |
PreviousTagSize1 | UI32 | Size of the previous tag |
Tag2 | FLVTAG | |
PreviousTagSize2 | UI32 | |
... | ... | |
PreviousTagSize[N-1] | UI32 | |
Tag[N] | FLVTAG | |
PreviousTagSize[N] | UI32 |
The FLV file body follows immediately after the FLV header.
The body is a sequence of back-pointers + tags; each back-pointer is a 4-byte value holding the size of the previous tag.
FLV Tag structure:
Field | Type | Comment |
---|---|---|
Tag type | UI8 | 8: audio, 9: video, 18: script data (metadata); all other values are reserved |
Data size | UI24 | Size of the data section, excluding the header; the tag header is always 11 bytes |
Timestamp | UI24 | Timestamp of the current frame in milliseconds, relative to the first tag of the FLV file; the first tag's timestamp is always 0. Note this is an absolute timestamp, not a delta (RTMP uses timestamp deltas) |
TimestampExtended | UI8 | Used when the timestamp exceeds 0xFFFFFF; this byte holds the high 8 bits of the timestamp, and the three bytes above hold the low 24 bits |
StreamID | UI24 | Always 0 |
Data | UI8[N] |
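A minimal sketch of decoding the 11-byte tag header, including the extended timestamp byte, might look like this (the `FlvTagHeaderParser` class is illustrative, not library code):

```java
public class FlvTagHeaderParser {
    public static final class TagHeader {
        public final int type;        // 8 = audio, 9 = video, 18 = script data
        public final int dataSize;    // payload size, excluding the 11-byte header
        public final int timestampMs; // extended byte forms the high 8 bits
        public final int streamId;    // always 0
        TagHeader(int type, int dataSize, int timestampMs, int streamId) {
            this.type = type;
            this.dataSize = dataSize;
            this.timestampMs = timestampMs;
            this.streamId = streamId;
        }
    }

    public static TagHeader parse(byte[] b) {
        int type = b[0] & 0xFF;
        // UI24 data size, big-endian.
        int dataSize = ((b[1] & 0xFF) << 16) | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
        // UI24 timestamp plus the extended byte (b[7]) as the high 8 bits.
        int timestampMs = ((b[7] & 0xFF) << 24)
                | ((b[4] & 0xFF) << 16) | ((b[5] & 0xFF) << 8) | (b[6] & 0xFF);
        int streamId = ((b[8] & 0xFF) << 16) | ((b[9] & 0xFF) << 8) | (b[10] & 0xFF);
        return new TagHeader(type, dataSize, timestampMs, streamId);
    }
}
```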
Audio Tag:
Field | Type | Comment |
---|---|---|
SoundFormat | UB4 | 0 = Linear PCM, platform endian; 1 = ADPCM; 2 = MP3; 3 = Linear PCM, little endian; 4 = Nellymoser 16-kHz mono; 5 = Nellymoser 8-kHz mono; 6 = Nellymoser; 7 = G.711 A-law logarithmic PCM; 8 = G.711 mu-law logarithmic PCM; 9 = reserved; 10 = AAC; 11 = Speex; 14 = MP3 8-kHz; 15 = Device-specific sound. Values 7, 8, 14, and 15 are reserved for internal use. FLV does not support G.711 A-law; linear PCM may have to be used instead. |
SoundRate | UB2 | For AAC: always 3. 0 = 5.5-kHz, 1 = 11-kHz, 2 = 22-kHz, 3 = 44-kHz |
SoundSize | UB1 | 0 = snd8Bit, 1 = snd16Bit |
SoundType | UB1 | 0 = mono, 1 = stereo. Always 1 for AAC |
AACPacketType | IF SoundFormat == 10, UI8 | The following values are defined: 0 = AAC sequence header, 1 = AAC raw |
If SoundFormat is 10 (AAC), the AudioTagHeader contains one extra byte, AACPacketType, which indicates the type of the AACAUDIODATA: 0 = AAC sequence header, 1 = AAC raw.
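The four bit fields above all live in the first byte of the audio tag body; a small helper to pick them apart might look like this (the `AudioTagFlags` class is illustrative, not library code):

```java
public class AudioTagFlags {
    // Sample rates in Hz, indexed by the 2-bit SoundRate field.
    static final int[] SAMPLE_RATES = {5512, 11025, 22050, 44100};

    public static int soundFormat(int b) { return (b >> 4) & 0x0F; }      // 10 = AAC
    public static int sampleRateHz(int b) { return SAMPLE_RATES[(b >> 2) & 0x03]; }
    public static int sampleSizeBits(int b) { return ((b >> 1) & 0x01) == 0 ? 8 : 16; }
    public static int channelCount(int b) { return (b & 0x01) == 0 ? 1 : 2; }
}
```

For a typical AAC stream the first byte is 0xAF: format 10 (AAC), rate index 3 (44.1 kHz), 16-bit, stereo.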
AAC AudioData format:
Field | Type | Comment |
---|---|---|
Data | IF AACPacketType == 0, AudioSpecificConfig | The AudioSpecificConfig is defined in ISO 14496-3. Note that this is not the same as the contents of the esds box from an MP4/F4V file. |
 | ELSE IF AACPacketType == 1, UI8[n] | Raw AAC frame data (the audio payload) |
VideoTag:
Field | Type | Comment |
---|---|---|
FrameType | UB4 | 1: keyframe (for AVC, a seekable frame; an H.264 IDR frame); 2: inter frame (for AVC, a non-seekable frame; an ordinary H.264 frame); 3: disposable inter frame (H.263 only); 4: generated keyframe (reserved for server use only); 5: video info/command frame |
CodecID | UB4 | Which codec is used: 1: JPEG (currently unused); 2: Sorenson H.263; 3: Screen video; 4: On2 VP6; 5: On2 VP6 with alpha channel; 6: Screen video version 2; 7: AVC |
VideoData | UI8[N] | For AVC, see AVCVIDEOPACKET below |
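FrameType and CodecID share the first byte of the video tag body; a sketch of decoding them (the `VideoTagFlags` class is illustrative, not library code):

```java
public class VideoTagFlags {
    public static final int FRAME_KEY = 1;
    public static final int CODEC_AVC = 7;

    public static int frameType(int b) { return (b >> 4) & 0x0F; } // high 4 bits
    public static int codecId(int b) { return b & 0x0F; }          // low 4 bits

    // 0x17 is the typical first byte of an H.264 keyframe tag.
    public static boolean isAvcKeyframe(int b) {
        return frameType(b) == FRAME_KEY && codecId(b) == CODEC_AVC;
    }
}
```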
AVC VIDEO PACKET:
Field | Type | Comment |
---|---|---|
AVCPacketType | UI8 | 0: AVC sequence header; 1: AVC NALU; 2: AVC end of sequence (not required for lower AVC levels) |
CompositionTime | SI24 | If AVCPacketType is 1, the composition time (CTS) offset; otherwise 0 |
Data | UI8[n] | If AVCPacketType is 0, the decoder configuration (SPS and PPS); if 1, one or more NAL units |
AVC VIDEO PACKET data format:
Field | Type | Comment |
---|---|---|
Length | UI32 | Length of the NAL unit (NALU), excluding the length field itself |
NALU data | UI8[N] | NALU payload without a 4-byte start code; the data begins directly with the H.264 NAL header byte, e.g. 65 ** ** ** or 41 ** ** ** |
Length | UI32 | Length of the NAL unit, excluding the length field itself |
NALU data | UI8[N] | Same as above |
... | ... | ... |
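Decoders usually expect Annex-B streams delimited by 00 00 00 01 start codes, so the length-prefixed layout above has to be rewritten, which is exactly what ExoPlayer's VideoTagPayloadReader does later in this article. A standalone sketch of the conversion (the `NaluConverter` class is illustrative, not library code):

```java
import java.io.ByteArrayOutputStream;

public class NaluConverter {
    // Rewrites length-prefixed NAL units (lengthFieldSize bytes each, big-endian)
    // into Annex-B units delimited by 00 00 00 01 start codes.
    public static byte[] toAnnexB(byte[] data, int lengthFieldSize) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int pos = 0;
        while (pos + lengthFieldSize <= data.length) {
            // Read the big-endian NAL unit length.
            int nalLength = 0;
            for (int i = 0; i < lengthFieldSize; i++) {
                nalLength = (nalLength << 8) | (data[pos++] & 0xFF);
            }
            // Replace the length field with a 4-byte start code.
            out.write(0); out.write(0); out.write(0); out.write(1);
            out.write(data, pos, nalLength);
            pos += nalLength;
        }
        return out.toByteArray();
    }
}
```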
2. How ExoPlayer Parses an FLV File
1. Open the data source via DataSource.
2. Extract (read) data through the Extractor interface; here the Extractor is FlvExtractor. The Extractor's read method is called in a loop until no more reading is needed.
```java
final class ExtractingLoadable implements Loadable {
    ......
    // Open the media source
    length = dataSource.open(new DataSpec(uri, position, C.LENGTH_UNSET, Util.sha1(uri.toString())));
    ......
    input = new DefaultExtractorInput(dataSource, position, length);
    ......
    while (/* not canceled && more reading needed */) {
        ......
        result = extractor.read(input, positionHolder); // Read from the input source
        ......
    }
}
```
3. FlvExtractor reads the data.
```java
public final class FlvExtractor implements Extractor, SeekMap {
    @Override
    public int read(ExtractorInput input, PositionHolder seekPosition)
            throws IOException, InterruptedException {
        while (true) {
            switch (parserState) {
                case STATE_READING_FLV_HEADER:
                    // Read the FLV header
                    if (!readFlvHeader(input)) {
                        return RESULT_END_OF_INPUT;
                    }
                    break;
                case STATE_SKIPPING_TO_TAG_HEADER:
                    // Skip past the previously read tag to the next tag header
                    skipToTagHeader(input);
                    break;
                case STATE_READING_TAG_HEADER:
                    // Read each tag's header
                    if (!readTagHeader(input)) {
                        return RESULT_END_OF_INPUT;
                    }
                    break;
                case STATE_READING_TAG_DATA:
                    // Read the tag data
                    if (readTagData(input)) {
                        return RESULT_CONTINUE;
                    }
                    break;
            }
        }
    }
}
```
3.1. Reading the FLV header
```java
private boolean readFlvHeader(ExtractorInput input) throws IOException, InterruptedException {
    // Always read the 9 bytes starting at offset 0
    if (!input.readFully(headerBuffer.data, 0, FLV_HEADER_SIZE, true)) {
        // We've reached the end of the stream.
        return false;
    }
    headerBuffer.setPosition(0);
    headerBuffer.skipBytes(4); // Skip the first 4 bytes: 'F', 'L', 'V' and the version
    int flags = headerBuffer.readUnsignedByte(); // Read the next byte
    boolean hasAudio = (flags & 0x04) != 0; // Audio present?
    boolean hasVideo = (flags & 0x01) != 0; // Video present?
    if (hasAudio && audioReader == null) {
        // Audio exists and the AudioTagPayloadReader has not been created yet
        audioReader = new AudioTagPayloadReader(extractorOutput.track(TAG_TYPE_AUDIO));
    }
    if (hasVideo && videoReader == null) {
        // Video exists and the VideoTagPayloadReader has not been created yet
        videoReader = new VideoTagPayloadReader(extractorOutput.track(TAG_TYPE_VIDEO));
    }
    if (metadataReader == null) {
        // Create the ScriptTagPayloadReader, which reads the metadata
        metadataReader = new ScriptTagPayloadReader(null);
    }
    extractorOutput.endTracks();
    extractorOutput.seekMap(this);
    // We need to skip any additional content in the FLV header, plus the 4 byte previous tag size.
    bytesToNextTagHeader = headerBuffer.readInt() - FLV_HEADER_SIZE + 4;
    parserState = STATE_SKIPPING_TO_TAG_HEADER;
    return true;
}
```
3.2. Reading the tag header
```java
private boolean readTagHeader(ExtractorInput input) throws IOException, InterruptedException {
    if (!input.readFully(tagHeaderBuffer.data, 0, FLV_TAG_HEADER_SIZE, true)) {
        // We've reached the end of the stream.
        return false;
    }
    tagHeaderBuffer.setPosition(0);
    tagType = tagHeaderBuffer.readUnsignedByte(); // 1 byte: the tag type (8: audio, 9: video, 18: metadata)
    tagDataSize = tagHeaderBuffer.readUnsignedInt24(); // 24 bits (3 bytes): size of the data section
    tagTimestampUs = tagHeaderBuffer.readUnsignedInt24(); // 24 bits: the tag timestamp
    tagTimestampUs = ((tagHeaderBuffer.readUnsignedByte() << 24) | tagTimestampUs) * 1000L;
    tagHeaderBuffer.skipBytes(3); // Skip the 3-byte streamId, which is always 0
    // The remaining 8*N bits are the tag's data section; set the state to
    // "reading tag data" so the next loop iteration reads the actual tag data
    parserState = STATE_READING_TAG_DATA;
    return true;
}
```
3.3. Reading the tag data
The tag data is read according to its TagType; each loop iteration reads the data of one tag:
```java
private boolean readTagData(ExtractorInput input) throws IOException, InterruptedException {
    boolean wasConsumed = true;
    if (tagType == TAG_TYPE_AUDIO && audioReader != null) {
        // Audio
        audioReader.consume(prepareTagData(input), tagTimestampUs);
    } else if (tagType == TAG_TYPE_VIDEO && videoReader != null) {
        // Video
        videoReader.consume(prepareTagData(input), tagTimestampUs);
    } else if (tagType == TAG_TYPE_SCRIPT_DATA && metadataReader != null) {
        // Metadata
        metadataReader.consume(prepareTagData(input), tagTimestampUs);
    } else {
        input.skipFully(tagDataSize);
        wasConsumed = false;
    }
    bytesToNextTagHeader = 4; // There's a 4 byte previous tag size before the next header.
    parserState = STATE_SKIPPING_TO_TAG_HEADER;
    return wasConsumed;
}
```
VideoReader, AudioReader, and MetadataReader all extend TagPayloadReader. Its consume method calls the abstract methods parseHeader and parsePayload to parse each tag's header and data; in every case the header is parsed first, then the payload:
```java
public final void consume(ParsableByteArray data, long timeUs) throws ParserException {
    if (parseHeader(data)) {
        parsePayload(data, timeUs);
    }
}
```
The MetadataReader is invoked first, yielding media information such as the audio and video codecs; subsequent iterations parse the audio or video tags. VideoReader and AudioReader are described below.
3.3.1. VideoReader parses the VideoTag:
Parsing the VideoTag header
```java
protected boolean parseHeader(ParsableByteArray data) throws UnsupportedFormatException {
    int header = data.readUnsignedByte(); // Read 1 byte
    int frameType = (header >> 4) & 0x0F; // High 4 bits: frame type (1: keyframe, 2: inter frame; 3/4/5 as described above)
    int videoCodec = (header & 0x0F); // Low 4 bits: codec ID (7 = AVC)
    // Support just H.264 encoded content.
    if (videoCodec != VIDEO_CODEC_AVC) {
        // Content that is not AVC-encoded cannot be played; throw immediately
        throw new UnsupportedFormatException("Video format not supported: " + videoCodec);
    }
    this.frameType = frameType;
    return (frameType != VIDEO_FRAME_VIDEO_INFO);
}
```
Parsing the VideoTag payload
```java
protected void parsePayload(ParsableByteArray data, long timeUs) throws ParserException {
    // Read 1 byte: the AVC packet type
    // 0: AVC sequence header
    // 1: AVC NALU
    // 2: AVC end of sequence
    int packetType = data.readUnsignedByte();
    int compositionTimeMs = data.readUnsignedInt24();
    timeUs += compositionTimeMs * 1000L;
    // Parse avc sequence header in case this was not done before.
    if (packetType == AVC_PACKET_TYPE_SEQUENCE_HEADER && !hasOutputFormat) {
        // Read the AVC packet sequence header
        ParsableByteArray videoSequence = new ParsableByteArray(new byte[data.bytesLeft()]);
        data.readBytes(videoSequence.data, 0, data.bytesLeft());
        AvcConfig avcConfig = AvcConfig.parse(videoSequence);
        nalUnitLengthFieldLength = avcConfig.nalUnitLengthFieldLength;
        // Construct and output the format.
        Format format = Format.createVideoSampleFormat(null, MimeTypes.VIDEO_H264, null,
                Format.NO_VALUE, Format.NO_VALUE, avcConfig.width, avcConfig.height, Format.NO_VALUE,
                avcConfig.initializationData, Format.NO_VALUE, avcConfig.pixelWidthAspectRatio, null);
        output.format(format);
        hasOutputFormat = true;
    } else if (packetType == AVC_PACKET_TYPE_AVC_NALU) {
        // Read the AVC packet NAL units
        // TODO: Deduplicate with Mp4Extractor.
        // Zero the top three bytes of the array that we'll use to decode nal unit lengths, in case
        // they're only 1 or 2 bytes long.
        byte[] nalLengthData = nalLength.data;
        nalLengthData[0] = 0;
        nalLengthData[1] = 0;
        nalLengthData[2] = 0;
        int nalUnitLengthFieldLengthDiff = 4 - nalUnitLengthFieldLength;
        // NAL units are length delimited, but the decoder requires start code delimited units.
        // Loop until we've written the sample to the track output, replacing length delimiters with
        // start codes as we encounter them.
        int bytesWritten = 0;
        int bytesToWrite;
        while (data.bytesLeft() > 0) {
            // Read the NAL length so that we know where to find the next one.
            data.readBytes(nalLength.data, nalUnitLengthFieldLengthDiff, nalUnitLengthFieldLength);
            nalLength.setPosition(0);
            bytesToWrite = nalLength.readUnsignedIntToInt();
            // Write a start code for the current NAL unit.
            nalStartCode.setPosition(0);
            output.sampleData(nalStartCode, 4);
            bytesWritten += 4;
            // Write the payload of the NAL unit.
            output.sampleData(data, bytesToWrite);
            bytesWritten += bytesToWrite;
        }
        output.sampleMetadata(timeUs,
                frameType == VIDEO_FRAME_KEYFRAME ? C.BUFFER_FLAG_KEY_FRAME : 0,
                bytesWritten, 0, null);
    }
}
```
3.3.2. AudioReader parses the AudioTag:
Parsing the AudioTag header
```java
protected boolean parseHeader(ParsableByteArray data) throws UnsupportedFormatException {
    if (!hasParsedAudioDataHeader) {
        int header = data.readUnsignedByte(); // Read the first byte
        int audioFormat = (header >> 4) & 0x0F; // High 4 bits: audio format (AAC: 10)
        int sampleRateIndex = (header >> 2) & 0x03; // Next 2 bits: sample rate index (44100 Hz: 3)
        if (sampleRateIndex < 0 || sampleRateIndex >= AUDIO_SAMPLING_RATE_TABLE.length) {
            throw new UnsupportedFormatException("Invalid sample rate index: " + sampleRateIndex);
        }
        // TODO: Add support for MP3 and PCM.
        if (audioFormat != AUDIO_FORMAT_AAC) {
            // Audio formats other than AAC are not supported yet
            throw new UnsupportedFormatException("Audio format not supported: " + audioFormat);
        }
        hasParsedAudioDataHeader = true;
    } else {
        // Skip header if it was parsed previously.
        data.skipBytes(1); // Skip the first byte
    }
    return true;
}
```
Parsing the AudioTag payload
```java
protected void parsePayload(ParsableByteArray data, long timeUs) {
    /**
     * For AAC, the first byte of the payload is the AACPacketType:
     * 0 = AAC sequence header
     * 1 = AAC raw (the audio elementary stream)
     */
    int packetType = data.readUnsignedByte();
    // Parse sequence header just in case it was not done before.
    if (packetType == AAC_PACKET_TYPE_SEQUENCE_HEADER && !hasOutputFormat) {
        byte[] audioSpecifiConfig = new byte[data.bytesLeft()];
        data.readBytes(audioSpecifiConfig, 0, audioSpecifiConfig.length);
        Pair<Integer, Integer> audioParams =
                CodecSpecificDataUtil.parseAacAudioSpecificConfig(audioSpecifiConfig);
        Format format = Format.createAudioSampleFormat(null, MimeTypes.AUDIO_AAC, null,
                Format.NO_VALUE, Format.NO_VALUE, audioParams.second, audioParams.first,
                Collections.singletonList(audioSpecifiConfig), null, 0, null);
        output.format(format);
        hasOutputFormat = true;
    } else if (packetType == AAC_PACKET_TYPE_AAC_RAW) {
        // Sample audio AAC frames
        int bytesToWrite = data.bytesLeft();
        output.sampleData(data, bytesToWrite);
        output.sampleMetadata(timeUs, C.BUFFER_FLAG_KEY_FRAME, bytesToWrite, 0, null);
    }
}
```
3. Decrypting Keyframe Data
Since VideoReader and AudioReader both extend TagPayloadReader, a decryption method is added to TagPayloadReader:
```java
/**
 * @param originBytes the original (encrypted) data
 * @return the decrypted data
 */
protected byte[] decryptBytes(byte[] originBytes) {
    try {
        final Cipher cipher = Cipher.getInstance("AES/ECB/PKCS7Padding");
        cipher.init(Cipher.DECRYPT_MODE, ecbSecretKey); // ecbSecretKey is the decryption key
        return cipher.doFinal(originBytes);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return originBytes;
}
```
Whenever data needs to be decrypted, the reader simply calls the parent class's decryptBytes method.
The decryption code is then added to VideoReader and AudioReader:
```java
// VideoReader fragment
// ......
else if (packetType == AVC_PACKET_TYPE_AVC_NALU) {
    // Decrypt the keyframe data
    Log.i("VIDEO", "isKeyframe = " + (frameType == VIDEO_FRAME_KEYFRAME));
    // TODO: the encrypted data.bytesLeft() bytes should be decrypted before the while loop.
    if (frameType == VIDEO_FRAME_KEYFRAME) {
        try {
            final byte[] tmpData = new byte[data.bytesLeft()];
            data.readBytesToBuffer(tmpData, 0, data.bytesLeft());
            final byte[] decryptBytes = decryptBytes(tmpData); // Decrypt the data
            data.reset(decryptBytes, decryptBytes.length, 0);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    // End of keyframe decryption
    while (data.bytesLeft() > 0) {
        // Read the NAL units
        // ......
    }
    // ......
}
```
```java
// AudioReader fragment
// ......
else if (packetType == AAC_PACKET_TYPE_AAC_RAW) {
    // Sample audio AAC frames
    // TODO: decrypt the encrypted data.bytesLeft() bytes of audio frame data here
    try {
        final long start = System.nanoTime();
        final byte[] tmpData = new byte[data.bytesLeft()];
        data.readBytesToBuffer(tmpData, 0, data.bytesLeft());
        final byte[] decryptBytes = decryptBytes(tmpData); // Decrypt the data
        Log.i("AUDIO", "lost time: " + (System.nanoTime() - start));
        data.reset(decryptBytes, decryptBytes.length);
    } catch (Exception e) {
        e.printStackTrace();
    }
    int bytesToWrite = data.bytesLeft();
    output.sampleData(data, bytesToWrite);
    output.sampleMetadata(timeUs, C.BUFFER_FLAG_KEY_FRAME, bytesToWrite, 0, null);
}
```
With this in place, an FLV video whose keyframes are encrypted plays back normally, as long as the correct key is supplied.
4. Making ExoPlayer Support Seeking in FLV Files
FLV files carry no built-in keyframe index, so every seek forces the player to load every tag from the current position up to the target position, which means long buffering and a very poor experience. The common workaround is to embed keyframes data in the ScriptTag, building an index from keyframes to file positions; this is an unofficial de facto standard.
The keyframes field contains two arrays, filepositions and times. times holds the keyframe timestamps, and filepositions holds the byte offset in the file of each corresponding keyframe. The two arrays must be the same length, both hold double values, and each fileposition pairs one-to-one with a time.
When the player parses the ScriptTag payload, it reads the filepositions and times arrays out of keyframes. When the user seeks, the player looks up the value in times closest to the requested time and obtains its index, mostCloseTimeIndex. Because filepositions and times correspond one-to-one, the value at that index in filepositions is the file offset of the keyframe nearest the seek point. Reading the data source from that offset implements seek playback.
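The lookup just described can be sketched as a small standalone class (this `KeyframeIndex` class is illustrative, not ExoPlayer API; since times is sorted ascending, a binary search can replace the linear scan used later in the article):

```java
import java.util.Arrays;

public class KeyframeIndex {
    private final double[] times;         // keyframe timestamps in seconds, ascending
    private final double[] filePositions; // byte offsets, same length as times

    public KeyframeIndex(double[] times, double[] filePositions) {
        this.times = times;
        this.filePositions = filePositions;
    }

    // Returns the byte offset of the keyframe closest to seekTimeUs.
    public long getPosition(long seekTimeUs) {
        double t = seekTimeUs / 1_000_000.0;
        int i = Arrays.binarySearch(times, t);
        if (i < 0) {
            int insertion = -i - 1;
            if (insertion == 0) {
                i = 0;
            } else if (insertion == times.length) {
                i = times.length - 1;
            } else {
                // Pick whichever neighbor is closer to the requested time.
                i = (t - times[insertion - 1] <= times[insertion] - t)
                        ? insertion - 1 : insertion;
            }
        }
        return (long) filePositions[i];
    }
}
```

For seeking to a playback position rather than "nearest frame", a common variant is to always round down to the previous keyframe so decoding can start there and run forward.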
The relevant code fragments:
1. Extracting the keyframes data from the ScriptTag
```java
protected void parsePayload(ParsableByteArray data, long timeUs) throws ParserException {
    ......
    // The hasKeyframes field exists and its value is true
    if (metadata.containsKey(KEY_HAS_KEYFRAMES) && (boolean) metadata.get(KEY_HAS_KEYFRAMES)) {
        Log.i("ScriptReader", "hasKeyframes = true");
        // Read the keyframes data from the metadata
        final HashMap<String, ArrayList<Double>> keyframes =
                (HashMap) metadata.get(KEY_KEYFRAMES);
        // keyframes is not null and both the filepositions and times fields exist
        if (keyframes != null && keyframes.containsKey(KEY_FILE_POSITIONS)
                && keyframes.containsKey(KEY_TIMES)) {
            // Read the filepositions list
            mFilePositions = keyframes.get(KEY_FILE_POSITIONS);
            // Read the times list
            mTimes = keyframes.get(KEY_TIMES);
            // Seeking is possible only when both lists are non-empty and equal in length
            if (mFilePositions != null && mFilePositions.size() > 0
                    && mTimes != null && mTimes.size() == mFilePositions.size()) {
                mSeekable = true;
            }
            Log.i("ScriptReader", "filePositions = " + mFilePositions);
            Log.i("ScriptReader", "times = " + mTimes);
        }
    } else {
        // Otherwise the stream is not seekable
        mSeekable = false;
    }
}
```
2. Changes in FlvExtractor
After the ScriptTag has been read, obtain the seekable state:
```java
private boolean readTagData(ExtractorInput input) throws IOException, InterruptedException {
    boolean wasConsumed = true;
    if (tagType == TAG_TYPE_AUDIO && audioReader != null) {
        // Audio
        audioReader.consume(prepareTagData(input), tagTimestampUs);
    } else if (tagType == TAG_TYPE_VIDEO && videoReader != null) {
        // Video
        videoReader.consume(prepareTagData(input), tagTimestampUs);
    } else if (tagType == TAG_TYPE_SCRIPT_DATA && metadataReader != null) {
        // Metadata
        metadataReader.consume(prepareTagData(input), tagTimestampUs);
        mSeekAble = metadataReader.isSeekAble(); // Obtain the seekable state
    } else {
        input.skipFully(tagDataSize);
        wasConsumed = false;
    }
    bytesToNextTagHeader = 4; // There's a 4 byte previous tag size before the next header.
    parserState = STATE_SKIPPING_TO_TAG_HEADER;
    return wasConsumed;
}
```
Based on the time the user seeks to, compute and return the file position at which loading should resume; the data loader then starts loading from that position, skipping everything in between:
```java
public long getPosition(long timeUs) {
    final long startT = System.nanoTime();
    double tmpTime = (double) timeUs / 1000 / 1000;
    final ArrayList<Double> times = metadataReader.getTimes();
    double tmpAbs = 0;
    int mostCloseIndex = 0;
    // Find the keyframe timestamp closest to the given time
    for (int i = 0; i < times.size(); i++) {
        final double abs = Math.abs(times.get(i) - tmpTime);
        if (i == 0 || abs <= tmpAbs) {
            tmpAbs = abs;
            mostCloseIndex = i;
        }
    }
    final double mostCloseTime = metadataReader.getTimes().get(mostCloseIndex);
    // Get the file position of that closest keyframe
    final double mostClosePos = metadataReader.getFilePositions().get(mostCloseIndex);
    Log.i("FlvExtractor", "getPosition lost time : " + (System.nanoTime() - startT));
    Log.i("FlvExtractor", "mostCloseIndex: " + mostCloseIndex
            + ", mostCloseTime: " + mostCloseTime + ", mostClosePos: " + mostClosePos);
    Log.i("FlvExtractor", "tmpTime: " + tmpTime);
    Log.i("FlvExtractor", "getPosition: " + timeUs);
    return (long) mostClosePos;
}
```
Loading data from the specified position:
```java
ExtractingLoadable loadable =
        new ExtractingLoadable(uri, dataSource, extractorHolder, loadCondition);
·······
loadable.setLoadPosition(seekMap.getPosition(pendingResetPositionUs));
·······
```
```java
final class ExtractingLoadable implements Loadable {
    ······
    public void setLoadPosition(long position) {
        positionHolder.position = position;
        pendingExtractorSeek = true;
    }
    ······
}
```
```java
dataSource.open(
        new DataSpec(uri, position, C.LENGTH_UNSET, Util.sha1(uri.toString()))); // Open the media source
```
```java
private HttpURLConnection makeConnection(URL url, byte[] postBody, long position,
        long length, boolean allowGzip, boolean followRedirects) throws IOException {
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setConnectTimeout(connectTimeoutMillis);
    connection.setReadTimeout(readTimeoutMillis);
    synchronized (requestProperties) {
        for (Map.Entry<String, String> property : requestProperties.entrySet()) {
            connection.setRequestProperty(property.getKey(), property.getValue());
        }
    }
    if (!(position == 0 && length == C.LENGTH_UNSET)) {
        String rangeRequest = "bytes=" + position + "-"; // Start loading from the given position
        if (length != C.LENGTH_UNSET) {
            rangeRequest += (position + length - 1);
        }
        connection.setRequestProperty("Range", rangeRequest);
    }
    ·········
```
With the above in place, seeking during FLV playback works.
Q: Regarding the FLV seek-playback part,
where should getPosition and the code that follows it be added?
1. getPosition is a method of the SeekMap interface. FlvExtractor (com.google.android.exoplayer2.extractor.flv.FlvExtractor) implements it, but the stock getPosition simply returns 0 (i.e. the official code does not implement FLV seeking), so the getPosition logic has to be written yourself. In the lines `final ArrayList times = metadataReader.getTimes();` and `metadataReader.getFilePositions()`, metadataReader is the ScriptTagPayloadReader; in its consume method you need to parse and cache the keyframe position information embedded by the unofficial convention described above, then match against it in getPosition.
2. The last two code blocks already exist in ExtractorMediaPeriod and DefaultHttpDataSource; they are the generic seek logic shared by media in other container formats.
For the concrete implementation, read the ExoPlayer 2.0 source code alongside this article and it should become clear.