Channel: Media Foundation Development for Windows Desktop forum

decode IMFSample by SourceReader?


Hi,

I want to use the SourceReader to read frames from a URL (a video file) without the decoding step. If I'm right, I have to set the output format to the type returned by GetNativeMediaType.

In the end I will receive an IMFSample object containing the encoded/compressed video frame. Is it possible to determine the specific frame type (GOP: I, P, or B frame)?

In a second step I want to decode this encoded video frame without loading it again from the source file. Could you tell me the simplest way to do that? Should I use a custom media source and the SourceReader API again, via MFCreateSourceReaderFromMediaSource, where the custom media source delivers the encoded video frame? Or is there a much easier way to achieve that?
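Here is roughly what I have in mind for the first step (a minimal sketch, error handling omitted; the URL is a placeholder). As far as I can tell the reader only flags key frames via MFSampleExtension_CleanPoint, so distinguishing P from B frames would mean parsing the bitstream:

IMFSourceReader *pReader = NULL;
HRESULT hr = MFCreateSourceReaderFromURL(L"http://example.com/test.mp4", NULL, &pReader);

// Pass the native (compressed) type straight through so no decoder is inserted.
IMFMediaType *pNative = NULL;
hr = pReader->GetNativeMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, &pNative);
hr = pReader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, NULL, pNative);

DWORD streamIndex = 0, flags = 0;
LONGLONG timestamp = 0;
IMFSample *pSample = NULL;
hr = pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                         0, &streamIndex, &flags, &timestamp, &pSample);

// TRUE for key frames (I-frames); P vs. B is not exposed here.
UINT32 isKeyFrame = MFGetAttributeUINT32(pSample, MFSampleExtension_CleanPoint, FALSE);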

best regards

saoirse


SequencerSource EVR vs SequencerSource MP4 Sink


I'm writing an application that uses a SequencerSource and either the EVR or an MP4 sink.

I have 3 videos and create a topology for each one. If I use the EVR as the sink, the videos play in sequence, no problem.

If I swap the EVR for an MP4 media sink, the resulting file size looks correct (the size of the three videos), but the duration is the length of the first video. Also, only the first video plays back.

I read that there is/was a bug where the media session only finalizes every other sink? Finalizing them explicitly didn't seem to help.

Running the EVR and the MP4 versions through MFTrace doesn't reveal anything obvious either. The two traces appear pretty much identical.

Any ideas gratefully & hugely appreciated!

Encoding framerate and time problem on Media Foundation


I am writing a program that encodes RGBA frames to an mp4 video with Microsoft Media Foundation. But, probably because the frame rate and sample durations are not correct, the resulting video plays back faster than real time.

How should I fix this so that the playback time correctly reflects the frame rate?

IMFMediaType *mediaout = NULL;
MFCreateMediaType(&mediaout);
mediaout->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
mediaout->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
mediaout->SetUINT32(MF_MT_AVG_BITRATE, vbitrate);
mediaout->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
MFSetAttributeSize(mediaout, MF_MT_FRAME_SIZE, owidth, oheight);
MFSetAttributeRatio(mediaout, MF_MT_FRAME_RATE, fps, 1); // e.g. fps = 15 or 60
MFSetAttributeRatio(mediaout, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
HRESULT hr = writer->AddStream(mediaout, &vindex);

MFCreateMediaType(&mediain);
mediain->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
mediain->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_ARGB32);
mediain->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
MFSetAttributeSize(mediain, MF_MT_FRAME_SIZE, width, height);
//MFSetAttributeRatio(mediain, MF_MT_FRAME_RATE, fps, 1);
MFSetAttributeRatio(mediain, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
hr = writer->SetInputMediaType(vindex, mediain, NULL);

MFFrameRateToAverageTimePerFrame(fps, 1, &duration);
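For reference, sample times and durations are in 100-nanosecond units, so the duration computed above should come out as 10,000,000 / fps (e.g. 333,333 for fps = 30):

// Sanity check: durations are in 100-ns units, so this should equal the
// value MFFrameRateToAverageTimePerFrame produced above (rounded).
UINT64 expected = 10ULL * 1000 * 1000 / fps;   // 333,333 at fps = 30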

DWORD EncodeVideo()
{
    HRESULT hr;
    ULONGLONG wait = 1000 / fps;
    while (true) {
        if (exit == true)
            break;
        ULONGLONG st = GetTickCount64();

        int stride = width * 4;
        int buflen = height * width * 4;
        IMFMediaBuffer *buffer;
        hr = MFCreateMemoryBuffer(buflen, &buffer);
        IMFSample *sample;
        MFCreateSample(&sample);
        sample->AddBuffer(buffer);

        UpdateFrame();
        BYTE *p;
        buffer->Lock(&p, NULL, NULL);
        hr = MFCopyImage(p, stride, GetBits(), stride, stride, height);
        buffer->Unlock();
        buffer->SetCurrentLength(buflen);

        sample->SetSampleTime(nvtime);
        sample->SetSampleDuration(duration);
        hr = writer->WriteSample(vindex, sample);
        if (SUCCEEDED(hr))
            nvtime += duration;
        sample->Release();
        buffer->Release();

        ULONGLONG et = GetTickCount64() - st;
        if (wait > et) {
            Sleep((DWORD)(wait - et));
        }
    }
    return S_OK;
}



Media Foundation decoder MFT for DV, MPEG-1 and MPEG-2


Hi,

could you please tell me where I can find the correct decoder MFTs for the DV codec as well as for the MPEG-1 and MPEG-2 codecs?

At the moment I can only find suitable decoder MFTs for H.264 and MPEG-4: CMSH264DecoderMFT and CMpeg4sDecMFT.

HRESULT hr = CoCreateInstance(__uuidof(CMpeg4sDecMFT), NULL,
                              CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&m_decoder));

Unfortunately there's no specific information on the web (https://msdn.microsoft.com/en-us/library/windows/desktop/hh162909%28v=vs.85%29.aspx).
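In case it helps, this is how I am trying to discover decoders instead of hard-coding a CLSID (a sketch; I am not sure that enumerating MFVideoFormat_MPEG2, or MFVideoFormat_MPG1 / MFVideoFormat_DVSD for the other codecs, returns anything on every OS version):

// Sketch: enumerate registered decoders for a given input subtype.
MFT_REGISTER_TYPE_INFO input = { MFMediaType_Video, MFVideoFormat_MPEG2 };

IMFActivate **ppActivate = NULL;
UINT32 count = 0;
HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER,
                       MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_LOCALMFT | MFT_ENUM_FLAG_SORTANDFILTER,
                       &input,   // input type the decoder must accept
                       NULL,     // any output type
                       &ppActivate, &count);

if (SUCCEEDED(hr) && count > 0)
{
    IMFTransform *pDecoder = NULL;
    hr = ppActivate[0]->ActivateObject(IID_PPV_ARGS(&pDecoder));
}
// (Releases and CoTaskMemFree(ppActivate) omitted.)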

best regards

saoirse

0xC00D36C4 (MF_E_UNSUPPORTED_BYTESTREAM_TYPE) for an AVI that should work


Hello,

I have been using Windows Media Foundation for some time and it works well. I can open and read many videos without problems.

But recently I tried to open a sequence and it fails. It is an AVI with the following streams:
Video : WVC1 1712x1280, 4:2:0 YUV
Audio : PCM S16 LE 8000Hz, 16bps

I am on Windows 7. Windows Media Player opens it perfectly, as does VLC.
But when I try to open it myself with methods such as IMFSourceResolver::CreateObjectFromURL or MFCreateSourceReaderFromURL, I get HRESULT 0xC00D36C4 (MF_E_UNSUPPORTED_BYTESTREAM_TYPE).

What could be the problem?

[Edit]
Without success, I tried the MF_RESOLUTION_CONTENT_DOES_NOT_HAVE_TO_MATCH_EXTENSION_OR_MIME_TYPE flag.
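For reference, the resolver call looks roughly like this (simplified; pwszURL points at the AVI):

IMFSourceResolver *pResolver = NULL;
HRESULT hr = MFCreateSourceResolver(&pResolver);

MF_OBJECT_TYPE objectType = MF_OBJECT_INVALID;
IUnknown *pSource = NULL;
hr = pResolver->CreateObjectFromURL(
        pwszURL,
        MF_RESOLUTION_MEDIASOURCE |
        MF_RESOLUTION_CONTENT_DOES_NOT_HAVE_TO_MATCH_EXTENSION_OR_MIME_TYPE,
        NULL,          // optional property store
        &objectType,
        &pSource);     // fails with 0xC00D36C4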


decode H264 frames encoded by the Android MediaCodec H264 encoder


I am trying to decode H264-encoded frames I receive from an Android device.

The frames are encoded using the MediaCodec APIs on the Android device, and the output of the encoder is streamed to a Windows PC.

However, I keep getting the "More input samples are required to produce output" result (MF_E_TRANSFORM_NEED_MORE_INPUT) when I try to get output by calling ProcessOutput.

I was wondering if there is some format inconsistency between what I get from the Android device and what I feed to the Media Foundation H264 decoder.

I have verified that every sample I feed to the MF decoder is a valid NAL unit (it starts with 0x00 0x00 0x00 0x01).
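One thing I am still checking (sketch below; "data"/"len" stand for the received buffer and its length): as far as I know the decoder needs the SPS and PPS NAL units before it will produce output for slice data, so I log the NAL unit type of every buffer I queue:

// nal_unit_type is the low 5 bits of the first byte after the 4-byte
// Annex-B start code: 7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice.
const BYTE *nal = data + 4;   // data begins with 00 00 00 01
int nalType = nal[0] & 0x1F;
printf("NAL type %d, %u bytes\n", nalType, (unsigned)len);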

I am able to play the streamed data with VLC after saving it to a file.

I can also upload the streamed data as a file if required.

Sample index to timestamp


Hello,

QuickTime has functions like MediaDisplayTimeToSampleNum() and SampleNumToMediaDecodeTime() that can be used to quickly convert a sample index to a media timestamp.

With the Media Foundation framework, I would like to know the sample count and the timestamp of each sample in a video file.
I could find no service or presentation attribute giving me access to such sample information.

The remaining solution is to preprocess the media source (without any decoder set) with repeated calls to ReadSample() and fill a timestamp array myself. Is that the only solution?

Moreover, ReadSample() does not accept a NULL [out] ppSample argument, which in this case makes me wonder about performance, because I only need the timestamp, not the sample data. Is there a way to ensure maximal performance by skipping the sample data?
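The preprocessing pass I have in mind looks like this (a sketch; pReader is a source reader created with the native compressed type left in place, so nothing is decoded):

std::vector<LONGLONG> timestamps;
for (;;)
{
    DWORD flags = 0;
    LONGLONG ts = 0;
    IMFSample *pSample = NULL;
    HRESULT hr = pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                     0, NULL, &flags, &ts, &pSample);
    if (FAILED(hr) || (flags & MF_SOURCE_READERF_ENDOFSTREAM))
        break;
    timestamps.push_back(ts);   // 100-ns units
    if (pSample)
        pSample->Release();     // the sample still has to be released even if unused
}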

Regards,

Pierre Chatelier


[Microsoft Media Foundation] Requirements for enabling the MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS parameter


I am working on an encoding program on Windows 8.1 / Visual C++ 2013, referring to this page. I tried various combinations of MFVideoFormat_H264 and MFAudioFormat_AAC with the MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS parameter. However, when IMFSinkWriter::WriteSample is called, it consumes a lot of CPU.

It seems that hardly any processing happens in hardware. Is this function not yet supported on Windows 8, or is some pre-processing required in this program?

IMFAttributes *spattr = NULL;
MFCreateAttributes(&spattr, 10);
spattr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
//spattr->SetUINT32(MF_READWRITE_DISABLE_CONVERTERS, FALSE);
spattr->SetUINT32(MF_LOW_LATENCY, TRUE);
spattr->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_MPEG4);

hr = MFCreateSinkWriterFromURL(path, NULL, spattr, &writer);
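One way to check whether a hardware H.264 encoder is registered at all (a sketch; if count comes back 0, the sink writer can only fall back to the software encoder regardless of the attribute):

// Ask MF whether any hardware H.264 encoder is registered.
MFT_REGISTER_TYPE_INFO output = { MFMediaType_Video, MFVideoFormat_H264 };

IMFActivate **ppActivate = NULL;
UINT32 count = 0;
HRESULT hr2 = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
                        MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
                        NULL, &output, &ppActivate, &count);
wprintf(L"hardware H.264 encoders found: %u\n", count);
// (Releases and CoTaskMemFree(ppActivate) omitted.)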


IMFMediaSession::Start method stops working on second instance


I have created a test app modeled on the playlist example in the Vista SDK. I modified it a bit to play video and to loop using the skip function, and then set it up to create two windows with a player in each. With one window it plays and loops just fine. When I add a second window with another playlist, something strange begins to happen: the first window keeps playing and looping fine, but when the second window reaches the end of its sequencer source it triggers the MEEndOfPresentation event, the code calls the skip function (which executes fine), and then the player just goes black. It must still be playing, because if I wait the player starts playing again, and if I wait even longer it goes black again. The whole time, the first window runs fine.

I have built this test app with the following example

https://msdn.microsoft.com/en-us/library/ms697285(v=vs.85).aspx

I could post the code for the test app, but it is quite long. I am hoping someone might have a clue about the problem before I fill this thread with code. I would appreciate it if someone could help me work through this.

HRESULT CPlayer::Skip(const MFSequencerElementId SegmentID)
{
    TRACE((L"\nCPlayer::Skip"));

    HRESULT hr = S_OK;

    PROPVARIANT var;
    PropVariantInit(&var);

    //this->m_pMediaSession->Stop();

    hr = MFCreateSequencerSegmentOffset(SegmentID, NULL, &var);

    if (SUCCEEDED(hr))
    {
        hr = m_pMediaSession->Start(&MF_TIME_FORMAT_SEGMENT_OFFSET, &var);  // this call seems to stop working
        LOG_IF_FAILED(L"IMFMediaSession::Start from skip", hr);
    }

    PropVariantClear(&var);

    return hr;
}

Thank You


Using MF for Blu-Ray 3D support on a WPF Application


Hi!

Is it possible to use Media Foundation to integrate Blu-ray 3D playback support into a WPF application? What would be the requirements for it?

I've looked at WPFMediaKit and MFNET without much success. Are there any other alternatives?

Best Regards,

Take a photo from the image stream using the capture engine in Media Foundation


Hi,

I am a beginner with Media Foundation. I have to develop a Win32 desktop application using the capture engine technique in Media Foundation.

I have to implement the following features: 1) show video streaming, 2) capture video, 3) capture a photo from the still-image stream. These features are implemented in the capture engine.

I am able to take a photo from the video stream, but not from the image stream. I tried to configure the image stream index in the AddStream() API, but it raises the MF_CAPTURE_ENGINE_ERROR event.

In DirectShow, the still pin is triggered with the IAMVideoControl::SetMode method. How do I implement this feature using the capture engine in MF? Is it possible in Media Foundation at all? I have searched many sites with no luck.

Here is the sample code I use to capture an image.

HRESULT TakePhoto()
{
    IMFCaptureSink *pSink = NULL;
    IMFCapturePhotoSink *pPhoto = NULL;
    IMFCaptureSource *pSource = NULL;
    IMFMediaType *pMediaType = NULL;
    IMFMediaType *pMediaType2 = NULL;

    HRESULT hr = m_pEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PHOTO, &pSink);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pSink->QueryInterface(IID_PPV_ARGS(&pPhoto));
    if (FAILED(hr))
    {
        goto done;
    }

    hr = m_pEngine->GetSource(&pSource);
    if (FAILED(hr))
    {
        goto done;
    }

    // 1 is the image stream index; this returns the current image stream media type.
    hr = pSource->GetCurrentDeviceMediaType(1, &pMediaType);
    if (FAILED(hr))
    {
        goto done;
    }

    // Configure the photo format
    hr = CreatePhotoMediaType(pMediaType, &pMediaType2, GUID_ContainerFormatBmp);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pPhoto->RemoveAllStreams();
    if (FAILED(hr))
    {
        goto done;
    }

    DWORD dwSinkStreamIndex;
    // Try to connect the first still-image stream to the photo sink.
    // When I pass the index 1 here instead of
    // MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_PHOTO, I get the error.
    if (bHasPhotoStream)
    {
        hr = pPhoto->AddStream((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_PHOTO,
                               pMediaType2, NULL, &dwSinkStreamIndex);
        if (FAILED(hr))
        {
            goto done;
        }
    }

    hr = pPhoto->SetOutputFileName(pszFileName);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = m_pEngine->TakePhoto();

done:
    // Releases of pSink, pPhoto, pSource, pMediaType and pMediaType2 omitted.
    return hr;
}

HRESULT OnCaptureEvent(WPARAM wParam, LPARAM lParam)
{
    GUID guidType;

    IMFMediaEvent *pEvent = reinterpret_cast<IMFMediaEvent*>(wParam);

    HRESULT hr = pEvent->GetExtendedType(&guidType);
    if (SUCCEEDED(hr))
    {
        // I receive this event when I pass dwSourceStreamIndex = 1 to AddStream.
        if (guidType == MF_CAPTURE_ENGINE_ERROR)
        {
            DestroyCaptureEngine();
        }
    }

    pEvent->Release();
    return hr;
}

Please help me solve this problem. I have been working on this issue for the past week and I couldn't find a solution. Please give me some ideas or some sample code.

Thanks in advance.

Regards,

Ambika


[Microsoft Media Foundation] Problem with an asynchronous custom media source


I asked about MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS previously and got an answer:

  1. My hardware doesn't support it. 
  2. It requires asynchronous processing.

Based on these points, I rewrote the program, referring to the wavsink sample in the Win7 SDK. However, it still does not seem to be processed by hardware.

I uploaded the full VS2013 project (it generates a 5-second blank movie) to OneDrive. I want to know where the problem is.

#include <Windows.h>

#include "MFImage.h"
#pragma comment(lib, "Mf.lib")
#pragma comment(lib, "mfreadwrite")
#pragma comment(lib, "mfplat")
#pragma comment(lib, "mfplay")
#pragma comment(lib, "mfuuid")
#pragma comment(lib, "Shlwapi")

int main()
{
	CoInitializeEx(NULL, 0);
	MFStartup(MF_VERSION);

	HRESULT hr;

	MFT_REGISTER_TYPE_INFO codec;
	codec.guidMajorType = MFMediaType_Video;
	codec.guidSubtype = MFVideoFormat_H264;

	IMFActivate** activate_array = 0;
	IMFTransform* encoder = 0;
	UINT32 count = 0;
	hr = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
		MFT_ENUM_FLAG_ASYNCMFT | MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
		NULL, &codec, &activate_array, &count);
	activate_array[0]->ActivateObject(__uuidof(IMFTransform), (LPVOID*)&encoder);


	// Declarations are grouped up here so that the "goto EXIT" below does
	// not jump over an initialization.
	MFAsyncReader *asr = NULL;
	IMFAttributes *spattr = NULL;
	IMFSourceReader *reader = NULL;
	MFImageSource *isource = NULL;
	IMFMediaType *type = NULL;
	IMFSinkWriter *writer = NULL;
	IMFMediaType *mediaout = NULL;
	IMFAttributes *cpattr = NULL;
	DWORD vindex = 0;

	encoder->GetAttributes(&spattr);

	spattr->SetUINT32(MF_TRANSFORM_ASYNC, TRUE);
	spattr->SetUINT32(MF_TRANSFORM_ASYNC_UNLOCK, FALSE);
	spattr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
	spattr->SetUINT32(MF_LOW_LATENCY, TRUE);
	spattr->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_MPEG4);

	hr = MFCreateSinkWriterFromURL(L"C:\\data\\test.mp4", NULL, spattr, &writer);
	RELEASE(spattr);
	if(FAILED(hr)) goto EXIT;

	MFCreateMediaType(&mediaout);
	mediaout->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
	mediaout->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
	mediaout->SetUINT32(MF_MT_AVG_BITRATE, 1024 *1024);
	mediaout->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
	MFSetAttributeSize(mediaout, MF_MT_FRAME_SIZE, 640, 480);
	MFSetAttributeRatio(mediaout, MF_MT_FRAME_RATE, 30, 1);
	MFSetAttributeRatio(mediaout, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
	hr = writer->AddStream(mediaout, &vindex);
	RELEASE(mediaout);

	asr = new MFAsyncReader(writer);
	hr = MFCreateAttributes(&cpattr, 1);
	if (SUCCEEDED(hr)){
		hr = cpattr->SetUnknown(MF_SOURCE_READER_ASYNC_CALLBACK, asr);
	}

	if (SUCCEEDED(hr)){
		isource = new MFImageSource(hr);
		hr = MFCreateSourceReaderFromMediaSource(isource, cpattr, &reader);
	}
	RELEASE(cpattr);

	hr = reader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, &type);
	hr = writer->SetInputMediaType(vindex, type, NULL);
	RELEASE(type);


	asr->SetReader(reader);

	hr = writer->BeginWriting();
	hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, NULL, NULL, NULL, NULL);

	Sleep(5000);

	writer->Finalize();
	isource->Shutdown();

EXIT:
	RELEASE(asr);

	RELEASE(writer);
	RELEASE(reader);
	RELEASE(isource);

	MFShutdown();
	CoUninitialize();

	return 0;
}

WaveOut is causing ducking/audio attenuation to occur unintentionally.


Hi there,

I've encountered a strange issue with the waveOut APIs and I'm not sure if it's by design or if I'm missing something.

Basically, if I open a device specifying that the default communication device is to be used, that device will always cause ducking/stream attenuation for all future uses of the device, even when I no longer specify that the default communications device should be used.

I've put together some crude sample code to illustrate:

#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

typedef struct wavFileHeader {
    long chunkId;   // "RIFF" (0x52,0x49,0x46,0x46)
    long chunkSize; // (fileSize - 8) - bytes of data in the file following this field (bytesRemaining)
    long riffType;  // "WAVE" (0x57415645)
};

typedef struct fmtChunk {
    long chunkId;                   // "fmt " (0x666D7420)
    long chunkDataSize;             // 16 + extra format bytes
    short compressionCode;          // 1 - 65535
    short numChannels;              // 1 - 65535
    long sampleRate;                // 1 - 0xFFFFFFFF
    long avgBytesPerSec;            // 1 - 0xFFFFFFFF
    short blockAlign;               // 1 - 65535
    short significantBitsPerSample; // 2 - 65535
    short extraFormatBytes;         // 0 - 65535
};

typedef struct wavChunk {
    long chunkId;
    long chunkDataSize;
};

char *readFileData(char *szFilename, long &dataLengthOut)
{
    FILE *fp = fopen(szFilename, "rb");
    long len;
    char *buffer;

    fseek(fp, 0, SEEK_END);
    len = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    buffer = (char*)calloc(1, len + 1);
    fread(buffer, 1, len, fp);
    fclose(fp);
    dataLengthOut = len;
    return buffer;
}

void parseWav(char *data, bool playDefaultComm)
{
    long *mPtr;
    char *tmpPtr;
    char *buffer;
    WAVEFORMATEX wf;
    volatile WAVEHDR wh;
    HWAVEOUT hWaveOut;
    fmtChunk mFmtChunk;
    wavChunk mDataChunk;

    mPtr = (long*)data;
    if (mPtr[0] == 0x46464952) // little-endian check for 'RIFF'
    {
        mPtr += 3;
        if (mPtr[0] == 0x20746D66) // little-endian for "fmt "
        {
            tmpPtr = (char*)mPtr;
            memcpy(&mFmtChunk, tmpPtr, sizeof(mFmtChunk));
            tmpPtr += 8;
            tmpPtr += mFmtChunk.chunkDataSize;
            mPtr = (long*)tmpPtr;
            if (mPtr[0] == 0x61746164) // little-endian for "data"
            {
                tmpPtr = (char*)mPtr;
                memcpy(&mDataChunk, tmpPtr, sizeof(mDataChunk));
                mPtr += 2;
                buffer = (char*)malloc(mDataChunk.chunkDataSize);
                memcpy(buffer, mPtr, mDataChunk.chunkDataSize);
                printf("sampleRate: %d\n", mFmtChunk.sampleRate);

                wf.wFormatTag = mFmtChunk.compressionCode;
                wf.nChannels = mFmtChunk.numChannels;
                wf.nSamplesPerSec = mFmtChunk.sampleRate;
                wf.nAvgBytesPerSec = mFmtChunk.avgBytesPerSec;
                wf.nBlockAlign = mFmtChunk.blockAlign;
                wf.wBitsPerSample = mFmtChunk.significantBitsPerSample;
                wf.cbSize = mFmtChunk.extraFormatBytes;

                wh.lpData = buffer;
                wh.dwBufferLength = mDataChunk.chunkDataSize;
                wh.dwFlags = 0;
                wh.dwLoops = 0;

                if (playDefaultComm)
                {
                    waveOutOpen(&hWaveOut, WAVE_MAPPER, &wf, 0, 0, WAVE_MAPPED_DEFAULT_COMMUNICATION_DEVICE);
                }
                else
                {
                    // Make sure the device ID is set to your default communications device.
                    waveOutOpen(&hWaveOut, 1, &wf, 0, 0, CALLBACK_NULL);
                }
                waveOutPrepareHeader(hWaveOut, (wavehdr_tag*)&wh, sizeof(wh));
                waveOutWrite(hWaveOut, (wavehdr_tag*)&wh, sizeof(wh));
                do {} while (!(wh.dwFlags & WHDR_DONE));
                waveOutUnprepareHeader(hWaveOut, (wavehdr_tag*)&wh, sizeof(wh));
                waveOutClose(hWaveOut);
                free(buffer);
            }
        }
    }
    else
        printf("Invalid WAV\n");
}

int main()
{
    // choose your file to play
    char *filename = "c:/windows/media/tada.wav";
    char *buffer;
    long fileSize;

    buffer = readFileData(filename, fileSize);

    // Play the .wav using the default communications device.
    // Ducks audio as expected.
    parseWav(buffer, true);

    // Play the .wav using the specified device.
    // (Continues to cause ducking for some reason.)
    parseWav(buffer, false);

    free(buffer);
    return 0;
}

If I call parseWav(buffer, false) first, ducking does not occur.

It's only if I first open the default communications device that it continues to duck no matter what.

Thanks for your help,

Cathal

Edit:

To clarify what I'm trying to achieve:

I want to play two separate files on the same device.

One will cause ducking, the other will not.

GUI Tool for MF Trace Analysis


In the 2010 article Automating Trace Analysis, a method was described for visualizing MFTrace output, with a link to the corresponding Perl scripts on the MSDN Code Gallery, which no longer works. Can someone give an updated link to these scripts?

Can you suggest a GUI-based tool for analyzing the topology of sample processing from a webcam in WMF, similar to GraphEdit for DirectShow? Is there any progress on such a tool by Microsoft, or are there known open-source tools that do this?


Using Sink Writer to remux VC1 into WMV container


I have configured the Sink Writer to store a VC-1 elementary stream with timestamps in a WMV9 format file, and I've run into a problem.

The SinkWriter returns OK HRESULTs but produces no output if the input media type has not been configured. If I uncomment the line from the second code block, SetInputMediaType fails with error code 0xC00D36B4 (MF_E_INVALIDMEDIATYPE).

// Add input stream to the SinkWriter
//CHECK_HR(spSinkWriter->SetInputMediaType(StreamIndex, spMFTypeIn, NULL));

I assume that the SinkWriter can receive one or several packets of the VC-1 elementary stream, frame by frame with timestamps supplied, and store them in a WMV9 container file. Is that a correct assumption?

My application is based on the example Using the Sink Writer

and performs MF initialization:

	// Init MF
	HRESULT hr = S_OK;
	CHECK_HR(CoInitialize(NULL));
	CHECK_HR(MFStartup(MF_VERSION));

SinkWriter instance:

#define CHECK_HR(_hr) { hr = (_hr); if (FAILED(hr)) { wprintf( L"'" L#_hr L"' failed with error code 0x%08lx\n", hr ); goto ExitOnError; } }

...

HRESULT hr = S_OK;
pMFAttributes = NULL;
CHECK_HR(MFCreateAttributes(&pMFAttributes, 3));
CHECK_HR(pMFAttributes->SetUINT32(MF_TRANSCODE_DONOT_INSERT_ENCODER, 1u));

// Create SinkWriter
CHECK_HR(MFCreateSinkWriterFromURL(f_OutName, NULL, pMFAttributes, &spSinkWriter));

//
// Setup the output media type
//
CHECK_HR(MFCreateMediaType(&spMFTypeOut));
CHECK_HR(spMFTypeOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
CHECK_HR(spMFTypeOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_WMV3));

// Add output stream to the SinkWriter
CHECK_HR(spSinkWriter->AddStream(spMFTypeOut, &StreamIndex));

//
// Setup the input media type
//
CHECK_HR(MFCreateMediaType(&spMFTypeIn));
CHECK_HR(spMFTypeIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
CHECK_HR(spMFTypeIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_WVC1));

// Add input stream to the SinkWriter
//CHECK_HR(spSinkWriter->SetInputMediaType(StreamIndex, spMFTypeIn, NULL));

//
// Start encoding
//
CHECK_HR(spSinkWriter->BeginWriting());
MFSinkWriterStarted = TRUE;

ExitOnError:
return hr;
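My guess (unverified) is that the sink rejects the bare input type because it carries no format details. A more fully specified input type would look something like this, where the frame size and rate are placeholders that would have to match the actual stream:

// Sketch, unverified: describe the VC-1 elementary stream in more detail.
CHECK_HR(MFCreateMediaType(&spMFTypeIn));
CHECK_HR(spMFTypeIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
CHECK_HR(spMFTypeIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_WVC1));
CHECK_HR(spMFTypeIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive));
CHECK_HR(MFSetAttributeSize(spMFTypeIn, MF_MT_FRAME_SIZE, 1280, 720));
CHECK_HR(MFSetAttributeRatio(spMFTypeIn, MF_MT_FRAME_RATE, 30, 1));
CHECK_HR(MFSetAttributeRatio(spMFTypeIn, MF_MT_PIXEL_ASPECT_RATIO, 1, 1));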

...

This section writes a frame:

CComPtr<IMFSample> spSample;
CComPtr<IMFMediaBuffer> spBuffer[100];
BYTE *pbBuffer = NULL;

//
// Create a media sample
//
CHECK_HR(MFCreateSample(&spSample));
CHECK_HR(spSample->SetSampleDuration(hnsSampleDuration));
CHECK_HR(spSample->SetSampleTime(hnsSampleTime));
hnsSampleTime += hnsSampleDuration;
...

// Pop frame to output
for (unsigned i = 0; i < packets.size(); i++)
{
    ...

    //
    // Add a media buffer filled with the packet data
    //
    spBuffer[i] = NULL;
    CHECK_HR(MFCreateMemoryBuffer(num_write, &spBuffer[i]));
    CHECK_HR(spBuffer[i]->SetCurrentLength(num_write));
    CHECK_HR(spBuffer[i]->Lock(&pbBuffer, NULL, NULL));
    BYTE *data = packets[i].pData + offset;
    for (DWORD n = 0; n < num_write; n++)
    {
        pbBuffer[n] = data[n];
    }
    CHECK_HR(spBuffer[i]->Unlock());
    CHECK_HR(spSample->AddBuffer(spBuffer[i]));

    ...
}

//
// Write the media sample
//
CHECK_HR(spSinkWriter->WriteSample(StreamIndex, spSample));

And, finally, at the end of remuxing:

	if (MFSinkWriterStarted)
	{
		wprintf(L"Finalizing stream %d\n", StreamID);
		CHECK_HR(spSinkWriter->Flush(StreamIndex));
		CHECK_HR(spSinkWriter->Finalize());
	}

ExitOnError:
	return;

Could you please give me corrections, or point me at the right way to solve this task?

I would be very grateful for any suggestions regarding this use case.



The server is sending too much data error

Hi everyone,

I'm having a little issue with my media player playing an MMS stream from Windows Media Server. The stream plays for more than half its length, but then it stops and I get the error 'The server is sending too much data'.

Any help would be appreciated.

MFMediaEngine - not possible to set a specific audio endpoint?


Hi. We are currently using MFMediaEngine in our application; it works very well and can do everything we need. But recently we got a requirement to let the user choose the audio output device, and we found no way to set an audio endpoint ID. Can you please help? Maybe we missed something.

To Microsoft: if it is really not available, do you plan to add this function to MFMediaEngine? It is a very basic function that should probably be there already. For example, the audio output device ID could be set as an attribute when creating the MFMediaEngine; that would already be good enough for us.

Please help in understanding some basics


I'm very green when it comes to the MMF, so please bear with me. 

What I'm trying to do is create a simple "Hello World" style media source. The purpose is simply to make as thin an implementation as possible so that I can build on top of it. As I've started looking at the major pieces of the pipeline, I've noted the existence of media sources, media transforms, and media sinks.

For my simple demo I want my media sink, for now, to just be Windows Media Player (that is, whatever renderer already exists inside WMP). I want to come up with some fake file extension (.fake or whatever) and register a media source for it. My media source would then simply send white noise to the media player (I don't care about actually reading a file for now).

I really think this would be a great starting place for me. But I've hit a snag: I cannot seem to create a stream with an uncompressed RGB media type that WMP will recognize (it keeps complaining about an unrecognized codec). I set everything up in CreatePresentationDescriptor as described here: https://msdn.microsoft.com/en-us/library/windows/desktop/ff485865(v=vs.85).aspx, but setting the media subtype to RGB24 doesn't work.
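For reference, the stream's media type is built roughly like this (a sketch; the frame size and rate are arbitrary, and I am guessing that uncompressed RGB also wants the stride and sample size set explicitly):

// Sketch of the media type handed to MFCreateStreamDescriptor.
IMFMediaType *pType = NULL;
HRESULT hr = MFCreateMediaType(&pType);
hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB24);
hr = pType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
hr = pType->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE);
hr = pType->SetUINT32(MF_MT_FIXED_SIZE_SAMPLES, TRUE);
hr = MFSetAttributeSize(pType, MF_MT_FRAME_SIZE, 640, 480);
hr = MFSetAttributeRatio(pType, MF_MT_FRAME_RATE, 30, 1);
hr = MFSetAttributeRatio(pType, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
hr = pType->SetUINT32(MF_MT_DEFAULT_STRIDE, 640 * 3);
hr = pType->SetUINT32(MF_MT_SAMPLE_SIZE, 640 * 480 * 3);

IMFStreamDescriptor *pSD = NULL;
hr = MFCreateStreamDescriptor(0, 1, &pType, &pSD);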

Is this because there isn't a registered media transform for dealing with RGB24? I see Windows has a native MJPG decoder... should I generate my white noise as JPEGs and set MJPG as my MF_MT_SUBTYPE?

Registering a ByteStreamHandler for Internet Explorer?


Ultimately my goal is to get my own custom media type to play within the HTML5 <video> tag in Internet Explorer.

I have a byte stream handler, source, and stream objects, and they're working together (I know this because I can get my custom file type to play in Windows Media Player now). I registered my byte stream handler for the .ocv file extension (the extension I'm using), and I created a MIME type registry entry as well for "video/ocv" (pointing to the same byte stream handler).
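For completeness, the registration is done under the usual byte-stream-handler key; roughly this is what my installer writes (a sketch; the CLSID is a placeholder for my handler's real CLSID):

// Register the handler for the extension under
// HKLM\SOFTWARE\Microsoft\Windows Media Foundation\ByteStreamHandlers.
HKEY hKey = NULL;
const wchar_t *clsid = L"{11111111-2222-3333-4444-555555555555}";
RegCreateKeyExW(HKEY_LOCAL_MACHINE,
    L"SOFTWARE\\Microsoft\\Windows Media Foundation\\ByteStreamHandlers\\.ocv",
    0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL);
RegSetValueExW(hKey, clsid, 0, REG_SZ,
    (const BYTE*)L"OCV Byte Stream Handler",
    (DWORD)((wcslen(L"OCV Byte Stream Handler") + 1) * sizeof(wchar_t)));
RegCloseKey(hKey);
// The "video/ocv" MIME entry is created the same way, with "video/ocv"
// as the subkey name instead of ".ocv".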

The problem is that Internet Explorer doesn't seem to care. I create an HTML page, reference my custom media type in the <video> tag, and get an "Invalid Source" error in the HTML5 player (IIS has been set up to send the "video/ocv" MIME type). Using Process Monitor (from Sysinternals) I can see that IE isn't even looking in the registry to resolve my custom type. If I replace my media file in the <video> tag with an existing MP4 file, everything works (and I can see IE checking the registry for the MP4 byte stream handler).

So what gives? Is there something else I need to do for IE to recognize my custom media type and load my byte stream handler? 


Best performance/rendering method for Media Foundation in WPF


I have a C++/CLI component that implements a Media Foundation player. Currently I create a System.Windows.Forms.PictureBox in XAML and pass the PictureBox handle to IMFVideoDisplayControl::SetVideoWindow().

<WindowsFormsHost>
    <wf:PictureBox x:Name="clippingWindow" />
</WindowsFormsHost>

The performance seems okay on some machines, but on others it's pretty poor (frame-rate issues) compared to playing the same (mp4) video in Windows Media Player.

What's the best-practice/recommended method to render video with Media Foundation in a WPF application? Is there a standard implementation or code example I can follow?



