I have used the Microsoft Media Foundation transform (MFT) library to record video from a camera.
This saves the file to MP4/WMV format. But now I need to record video and get the raw stream, so that I can encode it to ISMV the way Expression Encoder does.
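For illustration, this is the kind of approach I have in mind: a source reader over the capture device that hands back raw frames. This is only a rough sketch with error handling omitted; none of the names below come from my actual code.

// Rough sketch: read raw video samples from the camera with a source reader,
// instead of letting the capture pipeline write MP4/WMV directly.
CComPtr<IMFAttributes> spAttr;
MFCreateAttributes(&spAttr, 1);
spAttr->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

IMFActivate **ppDevices = NULL;
UINT32 count = 0;
MFEnumDeviceSources(spAttr, &ppDevices, &count);

CComPtr<IMFMediaSource> spSource;
ppDevices[0]->ActivateObject(IID_PPV_ARGS(&spSource)); // first camera found

CComPtr<IMFSourceReader> spReader;
MFCreateSourceReaderFromMediaSource(spSource, NULL, &spReader);

for (;;)
{
    DWORD streamIndex = 0, flags = 0;
    LONGLONG timestamp = 0;
    CComPtr<IMFSample> spSample;
    spReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                         &streamIndex, &flags, &timestamp, &spSample);
    if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
        break;
    // spSample now holds one raw frame that could be fed to an ISMV encoder.
}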
Hi,
I am creating an Audio MFT. My requirement is to not render the audio data passing from the MFT to the SAR;
I want to dump the data inside the MFT instead.
How can I do so?
If I don't pass the data to the SAR, ProcessInput and ProcessOutput stop getting called from then on.
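To illustrate what I mean by dumping inside the MFT, here is a rough sketch (a synchronous MFT; m_pQueuedSample holds the sample received in ProcessInput, and m_pDumpFile is a hypothetical FILE* member):

// Sketch: inside ProcessOutput, copy the sample payload to a file, then
// still deliver the sample so the pipeline keeps calling ProcessInput/
// ProcessOutput. Error handling is simplified.
HRESULT CAudioDumpMFT::ProcessOutput(DWORD dwFlags, DWORD cOutputBufferCount,
                                     MFT_OUTPUT_DATA_BUFFER *pOutputSamples,
                                     DWORD *pdwStatus)
{
    if (!m_pQueuedSample)
        return MF_E_TRANSFORM_NEED_MORE_INPUT;

    CComPtr<IMFMediaBuffer> spBuffer;
    HRESULT hr = m_pQueuedSample->ConvertToContiguousBuffer(&spBuffer);
    if (SUCCEEDED(hr))
    {
        BYTE *pData = NULL;
        DWORD cbData = 0;
        hr = spBuffer->Lock(&pData, NULL, &cbData);
        if (SUCCEEDED(hr))
        {
            fwrite(pData, 1, cbData, m_pDumpFile); // dump the PCM payload
            spBuffer->Unlock();
        }
    }

    // Hand the sample downstream anyway; if it is held back here, no further
    // ProcessInput/ProcessOutput calls arrive.
    pOutputSamples[0].pSample = m_pQueuedSample.Detach();
    *pdwStatus = 0;
    return hr;
}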
Please suggest the proper way.
Best Regards,
Sharad
Hi,
My requirement is that the audio data should not play; only the video should play.
If I don't pass the audio data from my custom Audio MFT and pass only the video data
from my custom Video MFT, the calls to ProcessOutput in the Video MFT stop.
How can I achieve this scenario?
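For illustration, a different way to get video-only playback (not what my MFTs do today) would be to deselect the audio stream at the source, so the pipeline never pulls audio at all. A rough sketch, where pSource is the IMFMediaSource:

// Sketch: deselect the audio stream on the presentation descriptor.
CComPtr<IMFPresentationDescriptor> spPD;
pSource->CreatePresentationDescriptor(&spPD);

DWORD streamCount = 0;
spPD->GetStreamDescriptorCount(&streamCount);

for (DWORD i = 0; i < streamCount; ++i)
{
    BOOL selected = FALSE;
    CComPtr<IMFStreamDescriptor> spSD;
    spPD->GetStreamDescriptorByIndex(i, &selected, &spSD);

    CComPtr<IMFMediaTypeHandler> spHandler;
    spSD->GetMediaTypeHandler(&spHandler);

    GUID majorType = GUID_NULL;
    spHandler->GetMajorType(&majorType);

    if (majorType == MFMediaType_Audio)
        spPD->DeselectStream(i); // video-only playback
}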
Best Regards,
Sharad
We are experiencing a problem where the ActiveX control will randomly wait 30 seconds before actually playing a WAV file from an HTTP resource. In the web page we are able to talk to the ActiveX control and get information from it, but it does not start playing until about 30 seconds after we told it to play.
To get more information about what might be happening, we would like to see some logging. Is there any way to enable logging and see what is going on in the Windows Media Player ActiveX control?
My apologies if this is not the correct place to post this question.
Hi guys,
I am writing a video processing application. I am utilizing a media session. My pipeline consists of a source reader, a decoder MFT (found via MFTEnum; I query for the MFTs capable of outputting one of my four desired video formats), the Color Converter DSP (to convert the YUY2, NV12, and YV12 outputs of the decoder into RGB32), and my custom sink. My custom sink writes the samples with no issues. I am planning on adding a variation to my pipeline: I want to inject an H264 encoder between the color converter and the sink. In this scenario my custom sink will be replaced by the built-in MPEG4 sink.
That said, my original pipeline without the encoder and MPEG4 sink works fine.
I query the system for an H264 encoder with YUY2 input and H264 output, and it returns one match. I create an instance and try to set up the input and output types. SetOutputType fails. MFTrace does not give me much beyond the failure.
I tried both reusing an already set-up media type and creating the media type from scratch. Per the documentation I have filled in all of the pieces, setting the output type first and then the input type. The only other thing to note is that I am doing this on Windows 8; I still need to set up my VM to check on Windows 7.
Here is the sample code.
HRESULT MediaFoundationManager::FindEncoder(IMFTransform **decoder, IMFMediaType *type)
{
    HRESULT hr = S_OK;
    UINT32 count = 0;
    CLSID *ppCLSIDs = NULL;

    MFT_REGISTER_TYPE_INFO info = { 0 };
    info.guidMajorType = MFMediaType_Video;
    info.guidSubtype = MFVideoFormat_YUY2;

    MFT_REGISTER_TYPE_INFO outInfo = { 0 };
    outInfo.guidMajorType = MFMediaType_Video;
    outInfo.guidSubtype = MFVideoFormat_H264;

    hr = MFTEnum(MFT_CATEGORY_VIDEO_ENCODER,
                 0,        // Reserved
                 &info,    // Input type
                 &outInfo, // Output type
                 NULL,     // Reserved
                 &ppCLSIDs,
                 &count);

    if (SUCCEEDED(hr) && count == 0)
        hr = MF_E_TOPO_CODEC_NOT_FOUND;

    if (SUCCEEDED(hr))
        hr = CoCreateInstance(ppCLSIDs[0], NULL, CLSCTX_ALL, IID_PPV_ARGS(decoder));

    if (SUCCEEDED(hr))
        ConfigureMFTFromScratch(*decoder, info.guidSubtype, outInfo.guidSubtype, true);

    CoTaskMemFree(ppCLSIDs);
    return hr;
}
void MediaFoundationManager::ConfigureMFTFromScratch(IMFTransform *transform, GUID inputFormat, GUID outputFormat, bool outputFirst)
{
    CComPtr<IMFMediaType> inputMediaType = NULL;
    CComPtr<IMFMediaType> outputMediaType = NULL;

    Helper::CheckHR(MFCreateMediaType(&inputMediaType), "Create Media Type");
    Helper::CheckHR(MFCreateMediaType(&outputMediaType), "Create Media Type");

    // Input type: uncompressed video, geometry and timing taken from the source.
    Helper::CheckHR(inputMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Set major type");
    Helper::CheckHR(inputMediaType->SetGUID(MF_MT_SUBTYPE, inputFormat), "Set Sub type");
    Helper::CheckHR(MFSetAttributeSize(inputMediaType, MF_MT_FRAME_SIZE, _inputInfo.FrameWidth, _inputInfo.FrameHeight), "Set Frame Size");
    Helper::CheckHR(inputMediaType->SetUINT64(MF_MT_FRAME_RATE, _inputInfo.FrameRate), "Set Frame rate");
    Helper::CheckHR(inputMediaType->SetUINT32(MF_MT_AVG_BITRATE, _inputInfo.BitRate), "Set Bit rate");
    Helper::CheckHR(inputMediaType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive), "Set Interlace Mode");
    Helper::CheckHR(MFSetAttributeRatio(inputMediaType, MF_MT_PIXEL_ASPECT_RATIO, 1, 1), "Set Aspect Ratio");

    // Output type: same geometry and timing, compressed subtype (H264).
    Helper::CheckHR(outputMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Set major type");
    Helper::CheckHR(outputMediaType->SetGUID(MF_MT_SUBTYPE, outputFormat), "Set Sub type");
    Helper::CheckHR(MFSetAttributeSize(outputMediaType, MF_MT_FRAME_SIZE, _inputInfo.FrameWidth, _inputInfo.FrameHeight), "Set Frame Size");
    Helper::CheckHR(outputMediaType->SetUINT64(MF_MT_FRAME_RATE, _inputInfo.FrameRate), "Set Frame rate");
    Helper::CheckHR(outputMediaType->SetUINT32(MF_MT_AVG_BITRATE, _inputInfo.BitRate), "Set Bit rate");
    Helper::CheckHR(outputMediaType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive), "Set Interlace Mode");
    Helper::CheckHR(MFSetAttributeRatio(outputMediaType, MF_MT_PIXEL_ASPECT_RATIO, 1, 1), "Set Aspect Ratio");

    if (outputFirst)
    {
        UINT32 level = (UINT32)-1; // intent: let the encoder choose the level
        outputMediaType->SetUINT32(MF_MT_MPEG2_LEVEL, level);
        Helper::CheckHR(outputMediaType->SetUINT32(MF_MT_MPEG2_PROFILE, eAVEncH264VProfile_Main), "Set Profile Mode");
        Helper::CheckHR(transform->SetOutputType(0, outputMediaType, 0), "Set output type");
        Helper::CheckHR(transform->SetInputType(0, inputMediaType, 0), "Set input type");
    }
    else
    {
        Helper::CheckHR(transform->SetInputType(0, inputMediaType, 0), "Set input type");
        Helper::CheckHR(transform->SetOutputType(0, outputMediaType, 0), "Set output type");
    }
}
HRESULT 0x80004005 (E_FAIL) is returned at Helper::CheckHR(transform->SetOutputType(0, outputMediaType, 0),"Set output type");
Any ideas?
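One thing I plan to try next is asking the encoder which output types it proposes itself, and comparing them against the type being rejected. A debugging sketch (not part of the code above):

// Debugging sketch: list the output types the encoder offers on stream 0.
void DumpEncoderOutputTypes(IMFTransform *transform)
{
    for (DWORD i = 0; ; ++i)
    {
        CComPtr<IMFMediaType> spType;
        HRESULT hr = transform->GetOutputAvailableType(0, i, &spType);
        if (FAILED(hr)) // MF_E_NO_MORE_TYPES once the list is exhausted
            break;

        GUID subtype = GUID_NULL;
        spType->GetGUID(MF_MT_SUBTYPE, &subtype);
        // Inspect subtype and the other attributes in the debugger or MFTrace.
    }
}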
Hi guys,
I posted a question on another forum before finding this place. Here is the link.
http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/5951c5dc-a7e4-44f3-a6d8-862e0826f0e5/h264-setoutputtype-returns-efail-error?forum=windowsgeneraldevelopmentissues
The question text and sample code are identical to the post above.
FYI: _inputInfo is a bucket that holds the information retrieved from the source reader:
frame size, frame rate, and bit rate.
Any help is much appreciated.
(Note: I originally posted this question in the Windows Apps with C++ forum about 3 weeks ago, but received no replies. I realized quickly that this might be a better forum for this, but wanted to let that question run its course before posting again here. So any help you all could provide would be VERY much appreciated.)
I am using a C++/CX component to encode video using IMFSinkWriter, and everything works fine when I encode using the WMV3 codec, but when I try to use the H.264 codec I get E_CHANGED_STATE (0x8000000c: "A concurrent or interleaved operation changed the state of the object, invalidating this operation.") errors. I can't find any reason to believe this is a threading issue, which was my first thought: the component is being called from an async/await C# method, but everything awaitable is being awaited, as far as I can see, and there's no other threading-like behavior going on in the app.
My second thought was that somehow the sink writer's throttling (on by default) was being turned off. But this doesn't seem to be the case. In fact, I tried to explicitly enable throttling, but this didn't have any effect:
spAttr->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, false);
The encoding source is an RGB32 stream, and I am encoding at a 1.5Mbps bitrate, 25 FPS.
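For context, here is roughly how the sink writer is being set up. This is a simplified sketch: the 1280x720 frame size is an assumption and error handling is omitted; the bitrate and frame rate match the numbers above.

// Sketch of the sink writer setup: H.264 output at 1.5 Mbps / 25 fps from
// an RGB32 source.
CComPtr<IMFAttributes> spAttr;
MFCreateAttributes(&spAttr, 1);
spAttr->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, FALSE); // throttling on (the default)

CComPtr<IMFSinkWriter> spWriter;
MFCreateSinkWriterFromURL(L"output.mp4", NULL, spAttr, &spWriter);

CComPtr<IMFMediaType> spOut;
MFCreateMediaType(&spOut);
spOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
spOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
spOut->SetUINT32(MF_MT_AVG_BITRATE, 1500000);
MFSetAttributeSize(spOut, MF_MT_FRAME_SIZE, 1280, 720);
MFSetAttributeRatio(spOut, MF_MT_FRAME_RATE, 25, 1);
spOut->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);

DWORD streamIndex = 0;
spWriter->AddStream(spOut, &streamIndex);

CComPtr<IMFMediaType> spIn;
MFCreateMediaType(&spIn);
spIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
spIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
MFSetAttributeSize(spIn, MF_MT_FRAME_SIZE, 1280, 720);
MFSetAttributeRatio(spIn, MF_MT_FRAME_RATE, 25, 1);
spIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
spWriter->SetInputMediaType(streamIndex, spIn, NULL);

spWriter->BeginWriting();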
The error happens every time I try to encode a video of any real-world length: it will usually pop up before it gets about 10% of the way through a 5 minute output file. However, it isn't entirely deterministic: sometimes it will get partway through the third (of 58) segments, while other times it will happen on the first.
None of these problems affect WMV encoding. Can anyone offer any suggestions for things to try?
Additional note: I have seen some references suggesting that I might need to supply some metadata (a sample description box) with the video, but I can't find any documentation on this that I can understand. As indicated, I can encode WMV without getting this error (usually; I think I've seen it once or twice in the long history of testing this component), and all I would like to do is provide an alternative format for my users. Could the lack of this metadata be causing this, and if so, are there any accessible tutorials on constructing it?
I am really hoping somebody answers my question in this forum finally!!
Here is the issue.
I am trying to merge mp4 files into one.
It works fine in Windows 7 but not in Windows 8.
Let me explain.
I create two mp4 files. They have the exact same parameters as far as audio and video are concerned. So I simply employ MFCopy-style code to read samples from the two files and write them to a new mp4 file, as sketched below.
The two input mp4 files are encoded using standard media foundation writer mechanism.
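The core of the copy loop looks roughly like this. This is a simplified sketch: it assumes the reader and writer stream indexes line up, and it stops at the first end-of-stream (the real code waits for all streams to end). rtOffset is the running timestamp offset applied to files after the first.

// Sketch of the MFCopy-style copy loop for one input file.
HRESULT AppendFile(IMFSourceReader *pReader, IMFSinkWriter *pWriter,
                   LONGLONG rtOffset)
{
    for (;;)
    {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        CComPtr<IMFSample> spSample;
        HRESULT hr = pReader->ReadSample((DWORD)MF_SOURCE_READER_ANY_STREAM, 0,
                                         &streamIndex, &flags, &timestamp,
                                         &spSample);
        if (FAILED(hr))
            return hr;
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
            break;
        if (spSample)
        {
            spSample->SetSampleTime(timestamp + rtOffset); // shift into merged timeline
            hr = pWriter->WriteSample(streamIndex, spSample);
            if (FAILED(hr))
                return hr;
        }
    }
    return S_OK;
}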
This works great in Windows 7.
However, on Windows 8, the merged file is blank after the first input video is done.
That is, on Windows 8 the merged file plays fine up to the end of the first video; after that it is black. Always. It is consistent.
I then looked at the metadata using MediaInfo. The only difference between the files was "ReFrames".
On Win 7, mp4 files have ReFrames -> 1
On Win 8, mp4 files have ReFrames -> 2
To add to the twist: if I take the files encoded on Win 8 (the ones which have ReFrames of 2), put them on a Win 7 system, and merge them there, the result is _fine_.
Can anybody please help me? I have a sample project and input files I can share. It really is a consistent bug.
I'm creating an application for video conferencing using media foundation and I'm having an issue decoding the H264 video frames I receive over the network.
The Design
Currently my network source queues a token on every RequestSample call, unless there is a stored sample available. If a sample arrives over the network and no token is available, the sample is stored in a linked list; otherwise it is queued with the MEMediaSample event, as sketched below. I also have the decoder set to low latency.
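For concreteness, a sketch of that token/sample bookkeeping (member names are illustrative; locking and error handling omitted):

// Sketch of the request-token / stored-sample logic described above.
// m_tokens, m_samples, and m_spEventQueue are illustrative members.
void CNetworkStream::OnRequestSample(IUnknown *pToken)
{
    if (!m_samples.empty())
    {
        // A network sample is already waiting: deliver it immediately.
        CComPtr<IMFSample> spSample = m_samples.front();
        m_samples.pop_front();
        if (pToken)
            spSample->SetUnknown(MFSampleExtension_Token, pToken);
        m_spEventQueue->QueueEventParamUnk(MEMediaSample, GUID_NULL, S_OK, spSample);
    }
    else
    {
        m_tokens.push_back(pToken); // remember the request until data arrives
    }
}

void CNetworkStream::OnSampleFromNetwork(IMFSample *pSample)
{
    if (m_tokens.empty())
    {
        m_samples.push_back(pSample); // no outstanding request: store it
        return;
    }
    CComPtr<IUnknown> spToken = m_tokens.front();
    m_tokens.pop_front();
    if (spToken)
        pSample->SetUnknown(MFSampleExtension_Token, spToken);
    m_spEventQueue->QueueEventParamUnk(MEMediaSample, GUID_NULL, S_OK, pSample);
}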
My Issue
When running the topology using my network source, I immediately see the first frame rendered to the screen. I then experience a long pause until a live stream begins to play perfectly. After a few seconds the stream appears to pause, but then you notice that it is just looping through the same frames over and over, adding in a live frame every couple of seconds that then disappears immediately and goes back to displaying the old loop.
Why is this happening? I'm by no means an expert in H264, or Media Foundation for that matter, but I've been trying to fix this issue for weeks with no success. I have no idea where the problem might be. Please help me!
I have a Windows Store app where I use a SinkWriter to write audio and video samples to an mp4 file. I also want to write some metadata, like author and title, to the file. How do I do that?
When creating the SinkWriter I use the MFCreateSinkWriterFromURL function, where I pass an IMFAttributes object. Do I have to use this object, and if so, what is the guidKey for author and title?
Thanks in advance!
Ronald
Hi,
How can I create a two-pin MFT (one audio pin and one video pin)?
Is there any sample available for this?
Please help me out.
Best Regards,
Sharad
Hi,
I wrote an MFT which supports H.264 as its input type.
If I play any media file, by default it uses the Microsoft H.264 decoder.
When I query using the MFTEnumEx function it returns two MFTs, the Microsoft one and mine,
giving preference to the Microsoft one. How can I give my MFT higher preference than Microsoft's?
Also, if I want to change the order of the MFTs returned by MFTEnumEx, how can I do so? (My current enumeration call is sketched below.)
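// Sketch of the MFTEnumEx query described above; the exact flags are an
// assumption about my setup.
MFT_REGISTER_TYPE_INFO input = { MFMediaType_Video, MFVideoFormat_H264 };

IMFActivate **ppActivate = NULL;
UINT32 count = 0;
HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER,
                       MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_LOCALMFT |
                       MFT_ENUM_FLAG_SORTANDFILTER,
                       &input, // input type the MFT must accept
                       NULL,   // any output type
                       &ppActivate,
                       &count);
// ppActivate[0] is the entry the list ranks first (the Microsoft decoder in
// my case); each element must be Released and the array freed with CoTaskMemFree.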
Best Regards,
Sharad
Hello,
My application must read one video track and several audio tracks, and be able to specify one section of the file and play it in a loop. I have created a setup with Media Foundation, using the sequencer source and creating several topologies with the start and end points of the section I want to loop. It works, except for the fact that there is a 0.5 to 1 second stabilization period in the playback just when it goes back to the starting point.
First, I tried it with individual audio files and one video file. This was quite bad for some files: sometimes all the files were completely out of sync; sometimes the video was frozen for several seconds, then went very fast to catch up with the audio.
I got a good improvement by using only one file that includes the video and the multiple audio tracks. However, for most files there is still a problem with the smoothness of the transition.
With a poor-quality AVI video file I could make it work smoothly, which suggests that the method I use is correct. I have noticed that the smoothness of the loop is strongly related to the CPU load of simply playing the file.
I use "SetTopology" on the session with a series of topologies (sketched below), so normally it should preroll the next one during the playback of the current one, right? Or am I missing something there?
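For reference, this is the shape of my sequencer setup (a simplified sketch; pSegmentTopology stands for one of the loop-section topologies I build with the desired start and end times):

// Sketch: append each loop section as its own topology on the sequencer
// source, so the session can preroll the next segment during playback.
CComPtr<IMFSequencerSource> spSequencer;
MFCreateSequencerSource(NULL, &spSequencer);

MFSequencerElementId segmentId = 0;
spSequencer->AppendTopology(pSegmentTopology,
                            SequencerTopologyFlags_Last, // currently the last segment
                            &segmentId);
// The session is then driven by calling SetTopology for each pending presentation.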
My app works also on Mac, where I have used a similar setup with AVFoundation, and it works fine with the same media files I use on Windows.
What can I do to make the looping work smoothly with better-quality video on Windows? Is there something to do about it?
When I play the media file without looping, I notice that when I preroll it to some point and then hit the START button, the media starts instantly and with no glitch. Could it work better if I used two independent simple playback setups: start the first, preroll the second, then stop the first and start the second programmatically at the looping point?
Hi,
We are trying to run a simple Reader --> Writer (transcoder, VC1 -> H264) in Media Foundation.
The source data (VC1) is captured with our own equipment, so no "premium protected content" or similar is involved; the goal is to use the Intel® Quick Sync Video H.264 Encoder MFT.
Looking in the MediaFoundation trace log we can see that a hardware MFT is enumerated and created BUT it fails.
CoCreateInstance @ Created {4BE8D3C0-0515-4A37-AD55-E4BAE19AF471} Intel® Quick Sync Video H.264 Encoder MFT (c:\Program Files\Intel\Media SDK\mfx_mft_h264ve_w7_32.dll)
MFGetMFTMerit @ Merit validation failed for MFT @06A42CA0 (hr=E_FAIL)
We provide an IDirect3DDeviceManager9 pointer to Media Foundation when creating our source reader and sink writer, according to the documentation, roughly as sketched below.
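// Sketch of how we pass the device manager (pD3DManager is our existing
// IDirect3DDeviceManager9; file names are placeholders, error handling omitted).
CComPtr<IMFAttributes> spReaderAttr;
MFCreateAttributes(&spReaderAttr, 2);
spReaderAttr->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, pD3DManager);
spReaderAttr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);

CComPtr<IMFSourceReader> spReader;
MFCreateSourceReaderFromURL(L"input.wmv", spReaderAttr, &spReader);

CComPtr<IMFAttributes> spWriterAttr;
MFCreateAttributes(&spWriterAttr, 2);
spWriterAttr->SetUnknown(MF_SINK_WRITER_D3D_MANAGER, pD3DManager);
spWriterAttr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);

CComPtr<IMFSinkWriter> spWriter;
MFCreateSinkWriterFromURL(L"output.mp4", NULL, spWriterAttr, &spWriter);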
It's rather strange that MF wants to use a protected media path. Are we supposed to pass some encoder parameters to disable this type of behavior? Any ideas?
A standard monitor with a DVI cable is used, on Windows 7.
best regards,
Carl